Dec 11 07:59:03 np0005555520 kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 11 07:59:03 np0005555520 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 11 07:59:03 np0005555520 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 11 07:59:03 np0005555520 kernel: BIOS-provided physical RAM map:
Dec 11 07:59:03 np0005555520 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 11 07:59:03 np0005555520 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 11 07:59:03 np0005555520 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 11 07:59:03 np0005555520 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 11 07:59:03 np0005555520 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 11 07:59:03 np0005555520 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 11 07:59:03 np0005555520 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 11 07:59:03 np0005555520 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 11 07:59:03 np0005555520 kernel: NX (Execute Disable) protection: active
Dec 11 07:59:03 np0005555520 kernel: APIC: Static calls initialized
Dec 11 07:59:03 np0005555520 kernel: SMBIOS 2.8 present.
Dec 11 07:59:03 np0005555520 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 11 07:59:03 np0005555520 kernel: Hypervisor detected: KVM
Dec 11 07:59:03 np0005555520 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 11 07:59:03 np0005555520 kernel: kvm-clock: using sched offset of 4416600610 cycles
Dec 11 07:59:03 np0005555520 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 11 07:59:03 np0005555520 kernel: tsc: Detected 2799.998 MHz processor
Dec 11 07:59:03 np0005555520 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 11 07:59:03 np0005555520 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 11 07:59:03 np0005555520 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 11 07:59:03 np0005555520 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 11 07:59:03 np0005555520 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 11 07:59:03 np0005555520 kernel: Using GB pages for direct mapping
Dec 11 07:59:03 np0005555520 kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 11 07:59:03 np0005555520 kernel: ACPI: Early table checksum verification disabled
Dec 11 07:59:03 np0005555520 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 11 07:59:03 np0005555520 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 07:59:03 np0005555520 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 07:59:03 np0005555520 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 07:59:03 np0005555520 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 11 07:59:03 np0005555520 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 07:59:03 np0005555520 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 07:59:03 np0005555520 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 11 07:59:03 np0005555520 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 11 07:59:03 np0005555520 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 11 07:59:03 np0005555520 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 11 07:59:03 np0005555520 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 11 07:59:03 np0005555520 kernel: No NUMA configuration found
Dec 11 07:59:03 np0005555520 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 11 07:59:03 np0005555520 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec 11 07:59:03 np0005555520 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 11 07:59:03 np0005555520 kernel: Zone ranges:
Dec 11 07:59:03 np0005555520 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 11 07:59:03 np0005555520 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 11 07:59:03 np0005555520 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 11 07:59:03 np0005555520 kernel:  Device   empty
Dec 11 07:59:03 np0005555520 kernel: Movable zone start for each node
Dec 11 07:59:03 np0005555520 kernel: Early memory node ranges
Dec 11 07:59:03 np0005555520 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 11 07:59:03 np0005555520 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 11 07:59:03 np0005555520 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 11 07:59:03 np0005555520 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 11 07:59:03 np0005555520 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 11 07:59:03 np0005555520 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 11 07:59:03 np0005555520 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 11 07:59:03 np0005555520 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 11 07:59:03 np0005555520 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 11 07:59:03 np0005555520 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 11 07:59:03 np0005555520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 11 07:59:03 np0005555520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 11 07:59:03 np0005555520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 11 07:59:03 np0005555520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 11 07:59:03 np0005555520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 11 07:59:03 np0005555520 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 11 07:59:03 np0005555520 kernel: TSC deadline timer available
Dec 11 07:59:03 np0005555520 kernel: CPU topo: Max. logical packages:   8
Dec 11 07:59:03 np0005555520 kernel: CPU topo: Max. logical dies:       8
Dec 11 07:59:03 np0005555520 kernel: CPU topo: Max. dies per package:   1
Dec 11 07:59:03 np0005555520 kernel: CPU topo: Max. threads per core:   1
Dec 11 07:59:03 np0005555520 kernel: CPU topo: Num. cores per package:     1
Dec 11 07:59:03 np0005555520 kernel: CPU topo: Num. threads per package:   1
Dec 11 07:59:03 np0005555520 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 11 07:59:03 np0005555520 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 11 07:59:03 np0005555520 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 11 07:59:03 np0005555520 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 11 07:59:03 np0005555520 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 11 07:59:03 np0005555520 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 11 07:59:03 np0005555520 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 11 07:59:03 np0005555520 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 11 07:59:03 np0005555520 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 11 07:59:03 np0005555520 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 11 07:59:03 np0005555520 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 11 07:59:03 np0005555520 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 11 07:59:03 np0005555520 kernel: Booting paravirtualized kernel on KVM
Dec 11 07:59:03 np0005555520 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 11 07:59:03 np0005555520 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 11 07:59:03 np0005555520 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 11 07:59:03 np0005555520 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 11 07:59:03 np0005555520 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 11 07:59:03 np0005555520 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 11 07:59:03 np0005555520 kernel: random: crng init done
Dec 11 07:59:03 np0005555520 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: Fallback order for Node 0: 0 
Dec 11 07:59:03 np0005555520 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 11 07:59:03 np0005555520 kernel: Policy zone: Normal
Dec 11 07:59:03 np0005555520 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 11 07:59:03 np0005555520 kernel: software IO TLB: area num 8.
Dec 11 07:59:03 np0005555520 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 11 07:59:03 np0005555520 kernel: ftrace: allocating 49357 entries in 193 pages
Dec 11 07:59:03 np0005555520 kernel: ftrace: allocated 193 pages with 3 groups
Dec 11 07:59:03 np0005555520 kernel: Dynamic Preempt: voluntary
Dec 11 07:59:03 np0005555520 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 11 07:59:03 np0005555520 kernel: rcu: 	RCU event tracing is enabled.
Dec 11 07:59:03 np0005555520 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 11 07:59:03 np0005555520 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec 11 07:59:03 np0005555520 kernel: 	Rude variant of Tasks RCU enabled.
Dec 11 07:59:03 np0005555520 kernel: 	Tracing variant of Tasks RCU enabled.
Dec 11 07:59:03 np0005555520 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 11 07:59:03 np0005555520 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 11 07:59:03 np0005555520 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 11 07:59:03 np0005555520 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 11 07:59:03 np0005555520 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 11 07:59:03 np0005555520 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 11 07:59:03 np0005555520 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 11 07:59:03 np0005555520 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 11 07:59:03 np0005555520 kernel: Console: colour VGA+ 80x25
Dec 11 07:59:03 np0005555520 kernel: printk: console [ttyS0] enabled
Dec 11 07:59:03 np0005555520 kernel: ACPI: Core revision 20230331
Dec 11 07:59:03 np0005555520 kernel: APIC: Switch to symmetric I/O mode setup
Dec 11 07:59:03 np0005555520 kernel: x2apic enabled
Dec 11 07:59:03 np0005555520 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 11 07:59:03 np0005555520 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 11 07:59:03 np0005555520 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 11 07:59:03 np0005555520 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 11 07:59:03 np0005555520 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 11 07:59:03 np0005555520 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 11 07:59:03 np0005555520 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 11 07:59:03 np0005555520 kernel: Spectre V2 : Mitigation: Retpolines
Dec 11 07:59:03 np0005555520 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 11 07:59:03 np0005555520 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 11 07:59:03 np0005555520 kernel: RETBleed: Mitigation: untrained return thunk
Dec 11 07:59:03 np0005555520 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 11 07:59:03 np0005555520 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 11 07:59:03 np0005555520 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 11 07:59:03 np0005555520 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 11 07:59:03 np0005555520 kernel: x86/bugs: return thunk changed
Dec 11 07:59:03 np0005555520 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 11 07:59:03 np0005555520 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 11 07:59:03 np0005555520 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 11 07:59:03 np0005555520 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 11 07:59:03 np0005555520 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 11 07:59:03 np0005555520 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 11 07:59:03 np0005555520 kernel: Freeing SMP alternatives memory: 40K
Dec 11 07:59:03 np0005555520 kernel: pid_max: default: 32768 minimum: 301
Dec 11 07:59:03 np0005555520 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 11 07:59:03 np0005555520 kernel: landlock: Up and running.
Dec 11 07:59:03 np0005555520 kernel: Yama: becoming mindful.
Dec 11 07:59:03 np0005555520 kernel: SELinux:  Initializing.
Dec 11 07:59:03 np0005555520 kernel: LSM support for eBPF active
Dec 11 07:59:03 np0005555520 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 11 07:59:03 np0005555520 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 11 07:59:03 np0005555520 kernel: ... version:                0
Dec 11 07:59:03 np0005555520 kernel: ... bit width:              48
Dec 11 07:59:03 np0005555520 kernel: ... generic registers:      6
Dec 11 07:59:03 np0005555520 kernel: ... value mask:             0000ffffffffffff
Dec 11 07:59:03 np0005555520 kernel: ... max period:             00007fffffffffff
Dec 11 07:59:03 np0005555520 kernel: ... fixed-purpose events:   0
Dec 11 07:59:03 np0005555520 kernel: ... event mask:             000000000000003f
Dec 11 07:59:03 np0005555520 kernel: signal: max sigframe size: 1776
Dec 11 07:59:03 np0005555520 kernel: rcu: Hierarchical SRCU implementation.
Dec 11 07:59:03 np0005555520 kernel: rcu: 	Max phase no-delay instances is 400.
Dec 11 07:59:03 np0005555520 kernel: smp: Bringing up secondary CPUs ...
Dec 11 07:59:03 np0005555520 kernel: smpboot: x86: Booting SMP configuration:
Dec 11 07:59:03 np0005555520 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 11 07:59:03 np0005555520 kernel: smp: Brought up 1 node, 8 CPUs
Dec 11 07:59:03 np0005555520 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 11 07:59:03 np0005555520 kernel: node 0 deferred pages initialised in 31ms
Dec 11 07:59:03 np0005555520 kernel: Memory: 7763984K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 618228K reserved, 0K cma-reserved)
Dec 11 07:59:03 np0005555520 kernel: devtmpfs: initialized
Dec 11 07:59:03 np0005555520 kernel: x86/mm: Memory block size: 128MB
Dec 11 07:59:03 np0005555520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 11 07:59:03 np0005555520 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 11 07:59:03 np0005555520 kernel: pinctrl core: initialized pinctrl subsystem
Dec 11 07:59:03 np0005555520 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 11 07:59:03 np0005555520 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 11 07:59:03 np0005555520 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 11 07:59:03 np0005555520 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 11 07:59:03 np0005555520 kernel: audit: initializing netlink subsys (disabled)
Dec 11 07:59:03 np0005555520 kernel: audit: type=2000 audit(1765457939.964:1): state=initialized audit_enabled=0 res=1
Dec 11 07:59:03 np0005555520 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 11 07:59:03 np0005555520 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 11 07:59:03 np0005555520 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 11 07:59:03 np0005555520 kernel: cpuidle: using governor menu
Dec 11 07:59:03 np0005555520 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 11 07:59:03 np0005555520 kernel: PCI: Using configuration type 1 for base access
Dec 11 07:59:03 np0005555520 kernel: PCI: Using configuration type 1 for extended access
Dec 11 07:59:03 np0005555520 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 11 07:59:03 np0005555520 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 11 07:59:03 np0005555520 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 11 07:59:03 np0005555520 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 11 07:59:03 np0005555520 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 11 07:59:03 np0005555520 kernel: Demotion targets for Node 0: null
Dec 11 07:59:03 np0005555520 kernel: cryptd: max_cpu_qlen set to 1000
Dec 11 07:59:03 np0005555520 kernel: ACPI: Added _OSI(Module Device)
Dec 11 07:59:03 np0005555520 kernel: ACPI: Added _OSI(Processor Device)
Dec 11 07:59:03 np0005555520 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 11 07:59:03 np0005555520 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 11 07:59:03 np0005555520 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 11 07:59:03 np0005555520 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 11 07:59:03 np0005555520 kernel: ACPI: Interpreter enabled
Dec 11 07:59:03 np0005555520 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 11 07:59:03 np0005555520 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 11 07:59:03 np0005555520 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 11 07:59:03 np0005555520 kernel: PCI: Using E820 reservations for host bridge windows
Dec 11 07:59:03 np0005555520 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 11 07:59:03 np0005555520 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 11 07:59:03 np0005555520 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [3] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [4] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [5] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [6] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [7] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [8] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [9] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [10] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [11] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [12] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [13] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [14] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [15] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [16] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [17] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [18] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [19] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [20] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [21] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [22] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [23] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [24] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [25] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [26] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [27] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [28] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [29] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [30] registered
Dec 11 07:59:03 np0005555520 kernel: acpiphp: Slot [31] registered
Dec 11 07:59:03 np0005555520 kernel: PCI host bridge to bus 0000:00
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 11 07:59:03 np0005555520 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 11 07:59:03 np0005555520 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 11 07:59:03 np0005555520 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 11 07:59:03 np0005555520 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 11 07:59:03 np0005555520 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 11 07:59:03 np0005555520 kernel: iommu: Default domain type: Translated
Dec 11 07:59:03 np0005555520 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 11 07:59:03 np0005555520 kernel: SCSI subsystem initialized
Dec 11 07:59:03 np0005555520 kernel: ACPI: bus type USB registered
Dec 11 07:59:03 np0005555520 kernel: usbcore: registered new interface driver usbfs
Dec 11 07:59:03 np0005555520 kernel: usbcore: registered new interface driver hub
Dec 11 07:59:03 np0005555520 kernel: usbcore: registered new device driver usb
Dec 11 07:59:03 np0005555520 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 11 07:59:03 np0005555520 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 11 07:59:03 np0005555520 kernel: PTP clock support registered
Dec 11 07:59:03 np0005555520 kernel: EDAC MC: Ver: 3.0.0
Dec 11 07:59:03 np0005555520 kernel: NetLabel: Initializing
Dec 11 07:59:03 np0005555520 kernel: NetLabel:  domain hash size = 128
Dec 11 07:59:03 np0005555520 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 11 07:59:03 np0005555520 kernel: NetLabel:  unlabeled traffic allowed by default
Dec 11 07:59:03 np0005555520 kernel: PCI: Using ACPI for IRQ routing
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 11 07:59:03 np0005555520 kernel: vgaarb: loaded
Dec 11 07:59:03 np0005555520 kernel: clocksource: Switched to clocksource kvm-clock
Dec 11 07:59:03 np0005555520 kernel: VFS: Disk quotas dquot_6.6.0
Dec 11 07:59:03 np0005555520 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 11 07:59:03 np0005555520 kernel: pnp: PnP ACPI init
Dec 11 07:59:03 np0005555520 kernel: pnp: PnP ACPI: found 5 devices
Dec 11 07:59:03 np0005555520 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 11 07:59:03 np0005555520 kernel: NET: Registered PF_INET protocol family
Dec 11 07:59:03 np0005555520 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 11 07:59:03 np0005555520 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 11 07:59:03 np0005555520 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 11 07:59:03 np0005555520 kernel: NET: Registered PF_XDP protocol family
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 11 07:59:03 np0005555520 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 11 07:59:03 np0005555520 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 11 07:59:03 np0005555520 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 75825 usecs
Dec 11 07:59:03 np0005555520 kernel: PCI: CLS 0 bytes, default 64
Dec 11 07:59:03 np0005555520 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 11 07:59:03 np0005555520 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 11 07:59:03 np0005555520 kernel: ACPI: bus type thunderbolt registered
Dec 11 07:59:03 np0005555520 kernel: Trying to unpack rootfs image as initramfs...
Dec 11 07:59:03 np0005555520 kernel: Initialise system trusted keyrings
Dec 11 07:59:03 np0005555520 kernel: Key type blacklist registered
Dec 11 07:59:03 np0005555520 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 11 07:59:03 np0005555520 kernel: zbud: loaded
Dec 11 07:59:03 np0005555520 kernel: integrity: Platform Keyring initialized
Dec 11 07:59:03 np0005555520 kernel: integrity: Machine keyring initialized
Dec 11 07:59:03 np0005555520 kernel: Freeing initrd memory: 87820K
Dec 11 07:59:03 np0005555520 kernel: NET: Registered PF_ALG protocol family
Dec 11 07:59:03 np0005555520 kernel: xor: automatically using best checksumming function   avx       
Dec 11 07:59:03 np0005555520 kernel: Key type asymmetric registered
Dec 11 07:59:03 np0005555520 kernel: Asymmetric key parser 'x509' registered
Dec 11 07:59:03 np0005555520 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 11 07:59:03 np0005555520 kernel: io scheduler mq-deadline registered
Dec 11 07:59:03 np0005555520 kernel: io scheduler kyber registered
Dec 11 07:59:03 np0005555520 kernel: io scheduler bfq registered
Dec 11 07:59:03 np0005555520 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 11 07:59:03 np0005555520 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 11 07:59:03 np0005555520 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 11 07:59:03 np0005555520 kernel: ACPI: button: Power Button [PWRF]
Dec 11 07:59:03 np0005555520 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 11 07:59:03 np0005555520 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 11 07:59:03 np0005555520 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 11 07:59:03 np0005555520 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 11 07:59:03 np0005555520 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 11 07:59:03 np0005555520 kernel: Non-volatile memory driver v1.3
Dec 11 07:59:03 np0005555520 kernel: rdac: device handler registered
Dec 11 07:59:03 np0005555520 kernel: hp_sw: device handler registered
Dec 11 07:59:03 np0005555520 kernel: emc: device handler registered
Dec 11 07:59:03 np0005555520 kernel: alua: device handler registered
Dec 11 07:59:03 np0005555520 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 11 07:59:03 np0005555520 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 11 07:59:03 np0005555520 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 11 07:59:03 np0005555520 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 11 07:59:03 np0005555520 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 11 07:59:03 np0005555520 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 11 07:59:03 np0005555520 kernel: usb usb1: Product: UHCI Host Controller
Dec 11 07:59:03 np0005555520 kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 11 07:59:03 np0005555520 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 11 07:59:03 np0005555520 kernel: hub 1-0:1.0: USB hub found
Dec 11 07:59:03 np0005555520 kernel: hub 1-0:1.0: 2 ports detected
Dec 11 07:59:03 np0005555520 kernel: usbcore: registered new interface driver usbserial_generic
Dec 11 07:59:03 np0005555520 kernel: usbserial: USB Serial support registered for generic
Dec 11 07:59:03 np0005555520 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 11 07:59:03 np0005555520 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 11 07:59:03 np0005555520 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 11 07:59:03 np0005555520 kernel: mousedev: PS/2 mouse device common for all mice
Dec 11 07:59:03 np0005555520 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 11 07:59:03 np0005555520 kernel: rtc_cmos 00:04: registered as rtc0
Dec 11 07:59:03 np0005555520 kernel: rtc_cmos 00:04: setting system clock to 2025-12-11T12:59:02 UTC (1765457942)
Dec 11 07:59:03 np0005555520 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 11 07:59:03 np0005555520 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 11 07:59:03 np0005555520 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 11 07:59:03 np0005555520 kernel: usbcore: registered new interface driver usbhid
Dec 11 07:59:03 np0005555520 kernel: usbhid: USB HID core driver
Dec 11 07:59:03 np0005555520 kernel: drop_monitor: Initializing network drop monitor service
Dec 11 07:59:03 np0005555520 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 11 07:59:03 np0005555520 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 11 07:59:03 np0005555520 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 11 07:59:03 np0005555520 kernel: Initializing XFRM netlink socket
Dec 11 07:59:03 np0005555520 kernel: NET: Registered PF_INET6 protocol family
Dec 11 07:59:03 np0005555520 kernel: Segment Routing with IPv6
Dec 11 07:59:03 np0005555520 kernel: NET: Registered PF_PACKET protocol family
Dec 11 07:59:03 np0005555520 kernel: mpls_gso: MPLS GSO support
Dec 11 07:59:03 np0005555520 kernel: IPI shorthand broadcast: enabled
Dec 11 07:59:03 np0005555520 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 11 07:59:03 np0005555520 kernel: AES CTR mode by8 optimization enabled
Dec 11 07:59:03 np0005555520 kernel: sched_clock: Marking stable (3510014727, 160755605)->(3822634660, -151864328)
Dec 11 07:59:03 np0005555520 kernel: registered taskstats version 1
Dec 11 07:59:03 np0005555520 kernel: Loading compiled-in X.509 certificates
Dec 11 07:59:03 np0005555520 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 11 07:59:03 np0005555520 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 11 07:59:03 np0005555520 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 11 07:59:03 np0005555520 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 11 07:59:03 np0005555520 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 11 07:59:03 np0005555520 kernel: Demotion targets for Node 0: null
Dec 11 07:59:03 np0005555520 kernel: page_owner is disabled
Dec 11 07:59:03 np0005555520 kernel: Key type .fscrypt registered
Dec 11 07:59:03 np0005555520 kernel: Key type fscrypt-provisioning registered
Dec 11 07:59:03 np0005555520 kernel: Key type big_key registered
Dec 11 07:59:03 np0005555520 kernel: Key type encrypted registered
Dec 11 07:59:03 np0005555520 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 11 07:59:03 np0005555520 kernel: Loading compiled-in module X.509 certificates
Dec 11 07:59:03 np0005555520 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 11 07:59:03 np0005555520 kernel: ima: Allocated hash algorithm: sha256
Dec 11 07:59:03 np0005555520 kernel: ima: No architecture policies found
Dec 11 07:59:03 np0005555520 kernel: evm: Initialising EVM extended attributes:
Dec 11 07:59:03 np0005555520 kernel: evm: security.selinux
Dec 11 07:59:03 np0005555520 kernel: evm: security.SMACK64 (disabled)
Dec 11 07:59:03 np0005555520 kernel: evm: security.SMACK64EXEC (disabled)
Dec 11 07:59:03 np0005555520 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 11 07:59:03 np0005555520 kernel: evm: security.SMACK64MMAP (disabled)
Dec 11 07:59:03 np0005555520 kernel: evm: security.apparmor (disabled)
Dec 11 07:59:03 np0005555520 kernel: evm: security.ima
Dec 11 07:59:03 np0005555520 kernel: evm: security.capability
Dec 11 07:59:03 np0005555520 kernel: evm: HMAC attrs: 0x1
Dec 11 07:59:03 np0005555520 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 11 07:59:03 np0005555520 kernel: Running certificate verification RSA selftest
Dec 11 07:59:03 np0005555520 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 11 07:59:03 np0005555520 kernel: Running certificate verification ECDSA selftest
Dec 11 07:59:03 np0005555520 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 11 07:59:03 np0005555520 kernel: clk: Disabling unused clocks
Dec 11 07:59:03 np0005555520 kernel: Freeing unused decrypted memory: 2028K
Dec 11 07:59:03 np0005555520 kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 11 07:59:03 np0005555520 kernel: Write protecting the kernel read-only data: 30720k
Dec 11 07:59:03 np0005555520 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 11 07:59:03 np0005555520 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 11 07:59:03 np0005555520 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 11 07:59:03 np0005555520 kernel: usb 1-1: Product: QEMU USB Tablet
Dec 11 07:59:03 np0005555520 kernel: usb 1-1: Manufacturer: QEMU
Dec 11 07:59:03 np0005555520 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 11 07:59:03 np0005555520 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 11 07:59:03 np0005555520 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 11 07:59:03 np0005555520 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 11 07:59:03 np0005555520 kernel: Run /init as init process
Dec 11 07:59:03 np0005555520 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 11 07:59:03 np0005555520 systemd: Detected virtualization kvm.
Dec 11 07:59:03 np0005555520 systemd: Detected architecture x86-64.
Dec 11 07:59:03 np0005555520 systemd: Running in initrd.
Dec 11 07:59:03 np0005555520 systemd: No hostname configured, using default hostname.
Dec 11 07:59:03 np0005555520 systemd: Hostname set to <localhost>.
Dec 11 07:59:03 np0005555520 systemd: Initializing machine ID from VM UUID.
Dec 11 07:59:03 np0005555520 systemd: Queued start job for default target Initrd Default Target.
Dec 11 07:59:03 np0005555520 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec 11 07:59:03 np0005555520 systemd: Reached target Local Encrypted Volumes.
Dec 11 07:59:03 np0005555520 systemd: Reached target Initrd /usr File System.
Dec 11 07:59:03 np0005555520 systemd: Reached target Local File Systems.
Dec 11 07:59:03 np0005555520 systemd: Reached target Path Units.
Dec 11 07:59:03 np0005555520 systemd: Reached target Slice Units.
Dec 11 07:59:03 np0005555520 systemd: Reached target Swaps.
Dec 11 07:59:03 np0005555520 systemd: Reached target Timer Units.
Dec 11 07:59:03 np0005555520 systemd: Listening on D-Bus System Message Bus Socket.
Dec 11 07:59:03 np0005555520 systemd: Listening on Journal Socket (/dev/log).
Dec 11 07:59:03 np0005555520 systemd: Listening on Journal Socket.
Dec 11 07:59:03 np0005555520 systemd: Listening on udev Control Socket.
Dec 11 07:59:03 np0005555520 systemd: Listening on udev Kernel Socket.
Dec 11 07:59:03 np0005555520 systemd: Reached target Socket Units.
Dec 11 07:59:03 np0005555520 systemd: Starting Create List of Static Device Nodes...
Dec 11 07:59:03 np0005555520 systemd: Starting Journal Service...
Dec 11 07:59:03 np0005555520 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 11 07:59:03 np0005555520 systemd: Starting Apply Kernel Variables...
Dec 11 07:59:03 np0005555520 systemd: Starting Create System Users...
Dec 11 07:59:03 np0005555520 systemd: Starting Setup Virtual Console...
Dec 11 07:59:03 np0005555520 systemd: Finished Create List of Static Device Nodes.
Dec 11 07:59:03 np0005555520 systemd: Finished Apply Kernel Variables.
Dec 11 07:59:03 np0005555520 systemd: Finished Create System Users.
Dec 11 07:59:03 np0005555520 systemd: Starting Create Static Device Nodes in /dev...
Dec 11 07:59:03 np0005555520 systemd-journald[308]: Journal started
Dec 11 07:59:03 np0005555520 systemd-journald[308]: Runtime Journal (/run/log/journal/0785382706134815965041016faf3709) is 8.0M, max 153.6M, 145.6M free.
Dec 11 07:59:03 np0005555520 systemd-sysusers[313]: Creating group 'users' with GID 100.
Dec 11 07:59:03 np0005555520 systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Dec 11 07:59:03 np0005555520 systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 11 07:59:03 np0005555520 systemd: Started Journal Service.
Dec 11 07:59:03 np0005555520 systemd[1]: Starting Create Volatile Files and Directories...
Dec 11 07:59:03 np0005555520 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 11 07:59:03 np0005555520 systemd[1]: Finished Create Volatile Files and Directories.
Dec 11 07:59:03 np0005555520 systemd[1]: Finished Setup Virtual Console.
Dec 11 07:59:03 np0005555520 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 11 07:59:03 np0005555520 systemd[1]: Starting dracut cmdline hook...
Dec 11 07:59:03 np0005555520 dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Dec 11 07:59:03 np0005555520 dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 11 07:59:03 np0005555520 systemd[1]: Finished dracut cmdline hook.
Dec 11 07:59:03 np0005555520 systemd[1]: Starting dracut pre-udev hook...
Dec 11 07:59:03 np0005555520 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 11 07:59:03 np0005555520 kernel: device-mapper: uevent: version 1.0.3
Dec 11 07:59:03 np0005555520 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 11 07:59:03 np0005555520 kernel: RPC: Registered named UNIX socket transport module.
Dec 11 07:59:03 np0005555520 kernel: RPC: Registered udp transport module.
Dec 11 07:59:03 np0005555520 kernel: RPC: Registered tcp transport module.
Dec 11 07:59:03 np0005555520 kernel: RPC: Registered tcp-with-tls transport module.
Dec 11 07:59:03 np0005555520 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 11 07:59:03 np0005555520 rpc.statd[446]: Version 2.5.4 starting
Dec 11 07:59:03 np0005555520 rpc.statd[446]: Initializing NSM state
Dec 11 07:59:03 np0005555520 rpc.idmapd[451]: Setting log level to 0
Dec 11 07:59:03 np0005555520 systemd[1]: Finished dracut pre-udev hook.
Dec 11 07:59:03 np0005555520 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 11 07:59:03 np0005555520 systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Dec 11 07:59:03 np0005555520 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 11 07:59:03 np0005555520 systemd[1]: Starting dracut pre-trigger hook...
Dec 11 07:59:03 np0005555520 systemd[1]: Finished dracut pre-trigger hook.
Dec 11 07:59:03 np0005555520 systemd[1]: Starting Coldplug All udev Devices...
Dec 11 07:59:04 np0005555520 systemd[1]: Created slice Slice /system/modprobe.
Dec 11 07:59:04 np0005555520 systemd[1]: Starting Load Kernel Module configfs...
Dec 11 07:59:04 np0005555520 systemd[1]: Finished Coldplug All udev Devices.
Dec 11 07:59:04 np0005555520 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 11 07:59:04 np0005555520 systemd[1]: Finished Load Kernel Module configfs.
Dec 11 07:59:04 np0005555520 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 11 07:59:04 np0005555520 systemd[1]: Reached target Network.
Dec 11 07:59:04 np0005555520 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 11 07:59:04 np0005555520 systemd[1]: Starting dracut initqueue hook...
Dec 11 07:59:04 np0005555520 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 11 07:59:04 np0005555520 systemd[1]: Mounting Kernel Configuration File System...
Dec 11 07:59:04 np0005555520 systemd[1]: Mounted Kernel Configuration File System.
Dec 11 07:59:04 np0005555520 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 11 07:59:04 np0005555520 kernel: vda: vda1
Dec 11 07:59:04 np0005555520 systemd[1]: Reached target System Initialization.
Dec 11 07:59:04 np0005555520 systemd[1]: Reached target Basic System.
Dec 11 07:59:04 np0005555520 systemd-udevd[475]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 07:59:04 np0005555520 kernel: scsi host0: ata_piix
Dec 11 07:59:04 np0005555520 kernel: scsi host1: ata_piix
Dec 11 07:59:04 np0005555520 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 11 07:59:04 np0005555520 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 11 07:59:04 np0005555520 systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 11 07:59:04 np0005555520 systemd[1]: Reached target Initrd Root Device.
Dec 11 07:59:04 np0005555520 kernel: ata1: found unknown device (class 0)
Dec 11 07:59:04 np0005555520 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 11 07:59:04 np0005555520 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 11 07:59:04 np0005555520 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 11 07:59:04 np0005555520 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 11 07:59:04 np0005555520 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 11 07:59:04 np0005555520 systemd[1]: Finished dracut initqueue hook.
Dec 11 07:59:04 np0005555520 systemd[1]: Reached target Preparation for Remote File Systems.
Dec 11 07:59:04 np0005555520 systemd[1]: Reached target Remote Encrypted Volumes.
Dec 11 07:59:04 np0005555520 systemd[1]: Reached target Remote File Systems.
Dec 11 07:59:04 np0005555520 systemd[1]: Starting dracut pre-mount hook...
Dec 11 07:59:04 np0005555520 systemd[1]: Finished dracut pre-mount hook.
Dec 11 07:59:04 np0005555520 systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 11 07:59:04 np0005555520 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Dec 11 07:59:04 np0005555520 systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 11 07:59:04 np0005555520 systemd[1]: Mounting /sysroot...
Dec 11 07:59:05 np0005555520 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 11 07:59:05 np0005555520 kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 11 07:59:05 np0005555520 kernel: XFS (vda1): Ending clean mount
Dec 11 07:59:05 np0005555520 systemd[1]: Mounted /sysroot.
Dec 11 07:59:05 np0005555520 systemd[1]: Reached target Initrd Root File System.
Dec 11 07:59:05 np0005555520 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 11 07:59:05 np0005555520 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 11 07:59:05 np0005555520 systemd[1]: Reached target Initrd File Systems.
Dec 11 07:59:05 np0005555520 systemd[1]: Reached target Initrd Default Target.
Dec 11 07:59:05 np0005555520 systemd[1]: Starting dracut mount hook...
Dec 11 07:59:05 np0005555520 systemd[1]: Finished dracut mount hook.
Dec 11 07:59:05 np0005555520 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 11 07:59:05 np0005555520 rpc.idmapd[451]: exiting on signal 15
Dec 11 07:59:05 np0005555520 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 11 07:59:05 np0005555520 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Network.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Timer Units.
Dec 11 07:59:05 np0005555520 systemd[1]: dbus.socket: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 11 07:59:05 np0005555520 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Initrd Default Target.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Basic System.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Initrd Root Device.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Initrd /usr File System.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Path Units.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Remote File Systems.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Slice Units.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Socket Units.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target System Initialization.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Local File Systems.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Swaps.
Dec 11 07:59:05 np0005555520 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped dracut mount hook.
Dec 11 07:59:05 np0005555520 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped dracut pre-mount hook.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped target Local Encrypted Volumes.
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 11 07:59:05 np0005555520 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped dracut initqueue hook.
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped Apply Kernel Variables.
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped Create Volatile Files and Directories.
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped Coldplug All udev Devices.
Dec 11 07:59:05 np0005555520 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped dracut pre-trigger hook.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped Setup Virtual Console.
Dec 11 07:59:05 np0005555520 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 11 07:59:05 np0005555520 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Closed udev Control Socket.
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Closed udev Kernel Socket.
Dec 11 07:59:05 np0005555520 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped dracut pre-udev hook.
Dec 11 07:59:05 np0005555520 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped dracut cmdline hook.
Dec 11 07:59:05 np0005555520 systemd[1]: Starting Cleanup udev Database...
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 11 07:59:05 np0005555520 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped Create List of Static Device Nodes.
Dec 11 07:59:05 np0005555520 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Stopped Create System Users.
Dec 11 07:59:05 np0005555520 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 11 07:59:05 np0005555520 systemd[1]: Finished Cleanup udev Database.
Dec 11 07:59:05 np0005555520 systemd[1]: Reached target Switch Root.
Dec 11 07:59:05 np0005555520 systemd[1]: Starting Switch Root...
Dec 11 07:59:05 np0005555520 systemd[1]: Switching root.
Dec 11 07:59:05 np0005555520 systemd-journald[308]: Journal stopped
Dec 11 07:59:06 np0005555520 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec 11 07:59:06 np0005555520 kernel: audit: type=1404 audit(1765457945.690:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 11 07:59:06 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 07:59:06 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 07:59:06 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 07:59:06 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 07:59:06 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 07:59:06 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 07:59:06 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 07:59:06 np0005555520 kernel: audit: type=1403 audit(1765457945.835:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 11 07:59:06 np0005555520 systemd: Successfully loaded SELinux policy in 147.574ms.
Dec 11 07:59:06 np0005555520 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.325ms.
Dec 11 07:59:06 np0005555520 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 11 07:59:06 np0005555520 systemd: Detected virtualization kvm.
Dec 11 07:59:06 np0005555520 systemd: Detected architecture x86-64.
Dec 11 07:59:06 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 07:59:06 np0005555520 systemd: initrd-switch-root.service: Deactivated successfully.
Dec 11 07:59:06 np0005555520 systemd: Stopped Switch Root.
Dec 11 07:59:06 np0005555520 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 11 07:59:06 np0005555520 systemd: Created slice Slice /system/getty.
Dec 11 07:59:06 np0005555520 systemd: Created slice Slice /system/serial-getty.
Dec 11 07:59:06 np0005555520 systemd: Created slice Slice /system/sshd-keygen.
Dec 11 07:59:06 np0005555520 systemd: Created slice User and Session Slice.
Dec 11 07:59:06 np0005555520 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec 11 07:59:06 np0005555520 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec 11 07:59:06 np0005555520 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 11 07:59:06 np0005555520 systemd: Reached target Local Encrypted Volumes.
Dec 11 07:59:06 np0005555520 systemd: Stopped target Switch Root.
Dec 11 07:59:06 np0005555520 systemd: Stopped target Initrd File Systems.
Dec 11 07:59:06 np0005555520 systemd: Stopped target Initrd Root File System.
Dec 11 07:59:06 np0005555520 systemd: Reached target Local Integrity Protected Volumes.
Dec 11 07:59:06 np0005555520 systemd: Reached target Path Units.
Dec 11 07:59:06 np0005555520 systemd: Reached target rpc_pipefs.target.
Dec 11 07:59:06 np0005555520 systemd: Reached target Slice Units.
Dec 11 07:59:06 np0005555520 systemd: Reached target Swaps.
Dec 11 07:59:06 np0005555520 systemd: Reached target Local Verity Protected Volumes.
Dec 11 07:59:06 np0005555520 systemd: Listening on RPCbind Server Activation Socket.
Dec 11 07:59:06 np0005555520 systemd: Reached target RPC Port Mapper.
Dec 11 07:59:06 np0005555520 systemd: Listening on Process Core Dump Socket.
Dec 11 07:59:06 np0005555520 systemd: Listening on initctl Compatibility Named Pipe.
Dec 11 07:59:06 np0005555520 systemd: Listening on udev Control Socket.
Dec 11 07:59:06 np0005555520 systemd: Listening on udev Kernel Socket.
Dec 11 07:59:06 np0005555520 systemd: Mounting Huge Pages File System...
Dec 11 07:59:06 np0005555520 systemd: Mounting POSIX Message Queue File System...
Dec 11 07:59:06 np0005555520 systemd: Mounting Kernel Debug File System...
Dec 11 07:59:06 np0005555520 systemd: Mounting Kernel Trace File System...
Dec 11 07:59:06 np0005555520 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 11 07:59:06 np0005555520 systemd: Starting Create List of Static Device Nodes...
Dec 11 07:59:06 np0005555520 systemd: Starting Load Kernel Module configfs...
Dec 11 07:59:06 np0005555520 systemd: Starting Load Kernel Module drm...
Dec 11 07:59:06 np0005555520 systemd: Starting Load Kernel Module efi_pstore...
Dec 11 07:59:06 np0005555520 systemd: Starting Load Kernel Module fuse...
Dec 11 07:59:06 np0005555520 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 11 07:59:06 np0005555520 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec 11 07:59:06 np0005555520 systemd: Stopped File System Check on Root Device.
Dec 11 07:59:06 np0005555520 systemd: Stopped Journal Service.
Dec 11 07:59:06 np0005555520 systemd: Starting Journal Service...
Dec 11 07:59:06 np0005555520 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 11 07:59:06 np0005555520 systemd: Starting Generate network units from Kernel command line...
Dec 11 07:59:06 np0005555520 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 11 07:59:06 np0005555520 kernel: fuse: init (API version 7.37)
Dec 11 07:59:06 np0005555520 systemd: Starting Remount Root and Kernel File Systems...
Dec 11 07:59:06 np0005555520 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 11 07:59:06 np0005555520 systemd: Starting Apply Kernel Variables...
Dec 11 07:59:06 np0005555520 systemd: Starting Coldplug All udev Devices...
Dec 11 07:59:06 np0005555520 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 11 07:59:06 np0005555520 systemd: Mounted Huge Pages File System.
Dec 11 07:59:06 np0005555520 systemd: Mounted POSIX Message Queue File System.
Dec 11 07:59:06 np0005555520 systemd: Mounted Kernel Debug File System.
Dec 11 07:59:06 np0005555520 systemd-journald[678]: Journal started
Dec 11 07:59:06 np0005555520 systemd-journald[678]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 11 07:59:06 np0005555520 systemd[1]: Queued start job for default target Multi-User System.
Dec 11 07:59:06 np0005555520 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 11 07:59:06 np0005555520 systemd: Started Journal Service.
Dec 11 07:59:06 np0005555520 systemd[1]: Mounted Kernel Trace File System.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Create List of Static Device Nodes.
Dec 11 07:59:06 np0005555520 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Load Kernel Module configfs.
Dec 11 07:59:06 np0005555520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 11 07:59:06 np0005555520 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Load Kernel Module fuse.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Generate network units from Kernel command line.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Apply Kernel Variables.
Dec 11 07:59:06 np0005555520 kernel: ACPI: bus type drm_connector registered
Dec 11 07:59:06 np0005555520 systemd[1]: Mounting FUSE Control File System...
Dec 11 07:59:06 np0005555520 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Rebuild Hardware Database...
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 11 07:59:06 np0005555520 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Load/Save OS Random Seed...
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Create System Users...
Dec 11 07:59:06 np0005555520 systemd-journald[678]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 11 07:59:06 np0005555520 systemd-journald[678]: Received client request to flush runtime journal.
Dec 11 07:59:06 np0005555520 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Load Kernel Module drm.
Dec 11 07:59:06 np0005555520 systemd[1]: Mounted FUSE Control File System.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Load/Save OS Random Seed.
Dec 11 07:59:06 np0005555520 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Create System Users.
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Coldplug All udev Devices.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 11 07:59:06 np0005555520 systemd[1]: Reached target Preparation for Local File Systems.
Dec 11 07:59:06 np0005555520 systemd[1]: Reached target Local File Systems.
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 11 07:59:06 np0005555520 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 11 07:59:06 np0005555520 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 11 07:59:06 np0005555520 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Automatic Boot Loader Update...
Dec 11 07:59:06 np0005555520 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Create Volatile Files and Directories...
Dec 11 07:59:06 np0005555520 bootctl[696]: Couldn't find EFI system partition, skipping.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Automatic Boot Loader Update.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Create Volatile Files and Directories.
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Security Auditing Service...
Dec 11 07:59:06 np0005555520 systemd[1]: Starting RPC Bind...
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Rebuild Journal Catalog...
Dec 11 07:59:06 np0005555520 auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 11 07:59:06 np0005555520 auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Rebuild Journal Catalog.
Dec 11 07:59:06 np0005555520 systemd[1]: Started RPC Bind.
Dec 11 07:59:06 np0005555520 augenrules[709]: /sbin/augenrules: No change
Dec 11 07:59:06 np0005555520 augenrules[724]: No rules
Dec 11 07:59:06 np0005555520 augenrules[724]: enabled 1
Dec 11 07:59:06 np0005555520 augenrules[724]: failure 1
Dec 11 07:59:06 np0005555520 augenrules[724]: pid 704
Dec 11 07:59:06 np0005555520 augenrules[724]: rate_limit 0
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog_limit 8192
Dec 11 07:59:06 np0005555520 augenrules[724]: lost 0
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog 3
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog_wait_time 60000
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog_wait_time_actual 0
Dec 11 07:59:06 np0005555520 augenrules[724]: enabled 1
Dec 11 07:59:06 np0005555520 augenrules[724]: failure 1
Dec 11 07:59:06 np0005555520 augenrules[724]: pid 704
Dec 11 07:59:06 np0005555520 augenrules[724]: rate_limit 0
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog_limit 8192
Dec 11 07:59:06 np0005555520 augenrules[724]: lost 0
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog 0
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog_wait_time 60000
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog_wait_time_actual 0
Dec 11 07:59:06 np0005555520 augenrules[724]: enabled 1
Dec 11 07:59:06 np0005555520 augenrules[724]: failure 1
Dec 11 07:59:06 np0005555520 augenrules[724]: pid 704
Dec 11 07:59:06 np0005555520 augenrules[724]: rate_limit 0
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog_limit 8192
Dec 11 07:59:06 np0005555520 augenrules[724]: lost 0
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog 0
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog_wait_time 60000
Dec 11 07:59:06 np0005555520 augenrules[724]: backlog_wait_time_actual 0
Dec 11 07:59:06 np0005555520 systemd[1]: Started Security Auditing Service.
Dec 11 07:59:06 np0005555520 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 11 07:59:06 np0005555520 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 11 07:59:07 np0005555520 systemd[1]: Finished Rebuild Hardware Database.
Dec 11 07:59:07 np0005555520 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 11 07:59:07 np0005555520 systemd[1]: Starting Update is Completed...
Dec 11 07:59:07 np0005555520 systemd[1]: Finished Update is Completed.
Dec 11 07:59:07 np0005555520 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Dec 11 07:59:07 np0005555520 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 11 07:59:07 np0005555520 systemd[1]: Reached target System Initialization.
Dec 11 07:59:07 np0005555520 systemd[1]: Started dnf makecache --timer.
Dec 11 07:59:07 np0005555520 systemd[1]: Started Daily rotation of log files.
Dec 11 07:59:07 np0005555520 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 11 07:59:07 np0005555520 systemd[1]: Reached target Timer Units.
Dec 11 07:59:07 np0005555520 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 11 07:59:07 np0005555520 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 11 07:59:07 np0005555520 systemd[1]: Reached target Socket Units.
Dec 11 07:59:07 np0005555520 systemd[1]: Starting D-Bus System Message Bus...
Dec 11 07:59:07 np0005555520 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 11 07:59:07 np0005555520 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 11 07:59:07 np0005555520 systemd[1]: Starting Load Kernel Module configfs...
Dec 11 07:59:07 np0005555520 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 11 07:59:07 np0005555520 systemd[1]: Finished Load Kernel Module configfs.
Dec 11 07:59:07 np0005555520 systemd-udevd[745]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 07:59:07 np0005555520 systemd[1]: Started D-Bus System Message Bus.
Dec 11 07:59:07 np0005555520 systemd[1]: Reached target Basic System.
Dec 11 07:59:07 np0005555520 systemd[1]: Starting NTP client/server...
Dec 11 07:59:07 np0005555520 dbus-broker-lau[752]: Ready
Dec 11 07:59:07 np0005555520 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 11 07:59:07 np0005555520 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 11 07:59:07 np0005555520 systemd[1]: Starting IPv4 firewall with iptables...
Dec 11 07:59:07 np0005555520 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 11 07:59:07 np0005555520 systemd[1]: Started irqbalance daemon.
Dec 11 07:59:07 np0005555520 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 11 07:59:07 np0005555520 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 07:59:07 np0005555520 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 07:59:07 np0005555520 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 07:59:07 np0005555520 systemd[1]: Reached target sshd-keygen.target.
Dec 11 07:59:07 np0005555520 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 11 07:59:07 np0005555520 systemd[1]: Reached target User and Group Name Lookups.
Dec 11 07:59:07 np0005555520 systemd[1]: Starting User Login Management...
Dec 11 07:59:07 np0005555520 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 11 07:59:07 np0005555520 chronyd[790]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 11 07:59:07 np0005555520 chronyd[790]: Loaded 0 symmetric keys
Dec 11 07:59:07 np0005555520 chronyd[790]: Using right/UTC timezone to obtain leap second data
Dec 11 07:59:07 np0005555520 chronyd[790]: Loaded seccomp filter (level 2)
Dec 11 07:59:07 np0005555520 systemd[1]: Started NTP client/server.
Dec 11 07:59:07 np0005555520 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 11 07:59:07 np0005555520 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 11 07:59:07 np0005555520 systemd-logind[786]: New seat seat0.
Dec 11 07:59:07 np0005555520 systemd[1]: Started User Login Management.
Dec 11 07:59:07 np0005555520 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 11 07:59:07 np0005555520 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 11 07:59:07 np0005555520 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 11 07:59:08 np0005555520 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 11 07:59:08 np0005555520 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 11 07:59:08 np0005555520 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 11 07:59:08 np0005555520 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 11 07:59:08 np0005555520 kernel: Console: switching to colour dummy device 80x25
Dec 11 07:59:08 np0005555520 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 11 07:59:08 np0005555520 kernel: [drm] features: -context_init
Dec 11 07:59:08 np0005555520 kernel: [drm] number of scanouts: 1
Dec 11 07:59:08 np0005555520 kernel: [drm] number of cap sets: 0
Dec 11 07:59:08 np0005555520 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 11 07:59:08 np0005555520 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 11 07:59:08 np0005555520 kernel: Console: switching to colour frame buffer device 128x48
Dec 11 07:59:08 np0005555520 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 11 07:59:08 np0005555520 kernel: kvm_amd: TSC scaling supported
Dec 11 07:59:08 np0005555520 kernel: kvm_amd: Nested Virtualization enabled
Dec 11 07:59:08 np0005555520 kernel: kvm_amd: Nested Paging enabled
Dec 11 07:59:08 np0005555520 kernel: kvm_amd: LBR virtualization supported
Dec 11 07:59:08 np0005555520 cloud-init[814]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 11 Dec 2025 12:59:08 +0000. Up 9.12 seconds.
Dec 11 07:59:08 np0005555520 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Dec 11 07:59:08 np0005555520 systemd[1]: Finished IPv4 firewall with iptables.
Dec 11 07:59:08 np0005555520 systemd[1]: run-cloud\x2dinit-tmp-tmppa8ou317.mount: Deactivated successfully.
Dec 11 07:59:08 np0005555520 systemd[1]: Starting Hostname Service...
Dec 11 07:59:08 np0005555520 systemd[1]: Started Hostname Service.
Dec 11 07:59:08 np0005555520 systemd-hostnamed[856]: Hostname set to <np0005555520.novalocal> (static)
Dec 11 07:59:08 np0005555520 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 11 07:59:08 np0005555520 systemd[1]: Reached target Preparation for Network.
Dec 11 07:59:08 np0005555520 systemd[1]: Starting Network Manager...
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8487] NetworkManager (version 1.54.2-1.el9) is starting... (boot:e322ae57-3e2e-454a-a9ae-b6dc8afa14c6)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8491] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8555] manager[0x5597a1c8f000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8589] hostname: hostname: using hostnamed
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8589] hostname: static hostname changed from (none) to "np0005555520.novalocal"
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8592] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8719] manager[0x5597a1c8f000]: rfkill: Wi-Fi hardware radio set enabled
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8720] manager[0x5597a1c8f000]: rfkill: WWAN hardware radio set enabled
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8756] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8756] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8757] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8757] manager: Networking is enabled by state file
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8759] settings: Loaded settings plugin: keyfile (internal)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8766] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8783] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8795] dhcp: init: Using DHCP client 'internal'
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8798] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8808] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 07:59:08 np0005555520 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8817] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8823] device (lo): Activation: starting connection 'lo' (36a70f10-2cec-4899-9bbf-682d0ec4233b)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8836] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8839] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8885] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8889] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8893] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8894] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8896] device (eth0): carrier: link connected
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8898] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8903] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8908] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8912] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8913] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8914] manager: NetworkManager state is now CONNECTING
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8916] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8922] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8925] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8958] dhcp4 (eth0): state changed new lease, address=38.129.56.119
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8966] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.8988] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 07:59:08 np0005555520 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 07:59:08 np0005555520 systemd[1]: Started Network Manager.
Dec 11 07:59:08 np0005555520 systemd[1]: Reached target Network.
Dec 11 07:59:08 np0005555520 systemd[1]: Starting Network Manager Wait Online...
Dec 11 07:59:08 np0005555520 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 11 07:59:08 np0005555520 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.9223] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.9226] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.9232] device (lo): Activation: successful, device activated.
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.9240] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.9241] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.9253] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.9256] device (eth0): Activation: successful, device activated.
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.9263] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 11 07:59:08 np0005555520 NetworkManager[860]: <info>  [1765457948.9266] manager: startup complete
Dec 11 07:59:08 np0005555520 systemd[1]: Started GSSAPI Proxy Daemon.
Dec 11 07:59:08 np0005555520 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 11 07:59:08 np0005555520 systemd[1]: Reached target NFS client services.
Dec 11 07:59:08 np0005555520 systemd[1]: Reached target Preparation for Remote File Systems.
Dec 11 07:59:08 np0005555520 systemd[1]: Reached target Remote File Systems.
Dec 11 07:59:08 np0005555520 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 11 07:59:08 np0005555520 systemd[1]: Finished Network Manager Wait Online.
Dec 11 07:59:08 np0005555520 systemd[1]: Starting Cloud-init: Network Stage...
Dec 11 07:59:09 np0005555520 cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 11 Dec 2025 12:59:09 +0000. Up 10.33 seconds.
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: |  eth0  | True |        38.129.56.119         | 255.255.255.0 | global | fa:16:3e:85:e7:35 |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe85:e735/64 |       .       |  link  | fa:16:3e:85:e7:35 |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 11 07:59:09 np0005555520 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 11 07:59:10 np0005555520 cloud-init[924]: Generating public/private rsa key pair.
Dec 11 07:59:10 np0005555520 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 11 07:59:10 np0005555520 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 11 07:59:10 np0005555520 cloud-init[924]: The key fingerprint is:
Dec 11 07:59:10 np0005555520 cloud-init[924]: SHA256:1zhVWynTUkrKhXfgqDvoWU6o+1ppwhh5uF8VWOgTnC8 root@np0005555520.novalocal
Dec 11 07:59:10 np0005555520 cloud-init[924]: The key's randomart image is:
Dec 11 07:59:10 np0005555520 cloud-init[924]: +---[RSA 3072]----+
Dec 11 07:59:10 np0005555520 cloud-init[924]: |      . o.  .+=.o|
Dec 11 07:59:10 np0005555520 cloud-init[924]: |       =o ..**.* |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |      ..o. +oo*  |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |    o  E .o+     |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |   + .  So+ .    |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |    *   =...     |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |   o o B =       |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |    . B = .      |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |     =++ .       |
Dec 11 07:59:10 np0005555520 cloud-init[924]: +----[SHA256]-----+
Dec 11 07:59:10 np0005555520 cloud-init[924]: Generating public/private ecdsa key pair.
Dec 11 07:59:10 np0005555520 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 11 07:59:10 np0005555520 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 11 07:59:10 np0005555520 cloud-init[924]: The key fingerprint is:
Dec 11 07:59:10 np0005555520 cloud-init[924]: SHA256:1rZ6fd7kk1v19P4Iz0yjKe2AmNdS+/brP8zy2/08P6M root@np0005555520.novalocal
Dec 11 07:59:10 np0005555520 cloud-init[924]: The key's randomart image is:
Dec 11 07:59:10 np0005555520 cloud-init[924]: +---[ECDSA 256]---+
Dec 11 07:59:10 np0005555520 cloud-init[924]: |                 |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |                 |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |                 |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |         .       |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |        S +     o|
Dec 11 07:59:10 np0005555520 cloud-init[924]: |       + = o   .+|
Dec 11 07:59:10 np0005555520 cloud-init[924]: |      o + =o. = *|
Dec 11 07:59:10 np0005555520 cloud-init[924]: |       . ooo+X.^+|
Dec 11 07:59:10 np0005555520 cloud-init[924]: |        .. +=E&*^|
Dec 11 07:59:10 np0005555520 cloud-init[924]: +----[SHA256]-----+
Dec 11 07:59:10 np0005555520 cloud-init[924]: Generating public/private ed25519 key pair.
Dec 11 07:59:10 np0005555520 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 11 07:59:10 np0005555520 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 11 07:59:10 np0005555520 cloud-init[924]: The key fingerprint is:
Dec 11 07:59:10 np0005555520 cloud-init[924]: SHA256:CEP4VJbaY8RNdES0Hp7oL1caIdSdpkOlQ5hNmSnGNw8 root@np0005555520.novalocal
Dec 11 07:59:10 np0005555520 cloud-init[924]: The key's randomart image is:
Dec 11 07:59:10 np0005555520 cloud-init[924]: +--[ED25519 256]--+
Dec 11 07:59:10 np0005555520 cloud-init[924]: |   ..oo*oXB*..   |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |  ....+ OoEo+    |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |   oo+ o +=*     |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |    oo+..+++.    |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |     ...S.+o     |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |       .  . .    |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |        .  +     |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |        ..o      |
Dec 11 07:59:10 np0005555520 cloud-init[924]: |         o.      |
Dec 11 07:59:10 np0005555520 cloud-init[924]: +----[SHA256]-----+
Dec 11 07:59:10 np0005555520 systemd[1]: Finished Cloud-init: Network Stage.
Dec 11 07:59:10 np0005555520 systemd[1]: Reached target Cloud-config availability.
Dec 11 07:59:10 np0005555520 systemd[1]: Reached target Network is Online.
Dec 11 07:59:10 np0005555520 systemd[1]: Starting Cloud-init: Config Stage...
Dec 11 07:59:10 np0005555520 systemd[1]: Starting Crash recovery kernel arming...
Dec 11 07:59:10 np0005555520 systemd[1]: Starting Notify NFS peers of a restart...
Dec 11 07:59:10 np0005555520 systemd[1]: Starting System Logging Service...
Dec 11 07:59:10 np0005555520 systemd[1]: Starting OpenSSH server daemon...
Dec 11 07:59:10 np0005555520 systemd[1]: Starting Permit User Sessions...
Dec 11 07:59:10 np0005555520 sm-notify[1006]: Version 2.5.4 starting
Dec 11 07:59:10 np0005555520 systemd[1]: Started Notify NFS peers of a restart.
Dec 11 07:59:10 np0005555520 systemd[1]: Finished Permit User Sessions.
Dec 11 07:59:10 np0005555520 systemd[1]: Started Command Scheduler.
Dec 11 07:59:10 np0005555520 systemd[1]: Started Getty on tty1.
Dec 11 07:59:10 np0005555520 systemd[1]: Started Serial Getty on ttyS0.
Dec 11 07:59:11 np0005555520 systemd[1]: Reached target Login Prompts.
Dec 11 07:59:11 np0005555520 systemd[1]: Started OpenSSH server daemon.
Dec 11 07:59:11 np0005555520 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Dec 11 07:59:11 np0005555520 systemd[1]: Started System Logging Service.
Dec 11 07:59:11 np0005555520 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 11 07:59:11 np0005555520 systemd[1]: Reached target Multi-User System.
Dec 11 07:59:11 np0005555520 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 11 07:59:11 np0005555520 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 11 07:59:11 np0005555520 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 11 07:59:11 np0005555520 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 07:59:11 np0005555520 kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Dec 11 07:59:11 np0005555520 kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 11 07:59:11 np0005555520 cloud-init[1138]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 11 Dec 2025 12:59:11 +0000. Up 12.18 seconds.
Dec 11 07:59:11 np0005555520 systemd[1]: Finished Cloud-init: Config Stage.
Dec 11 07:59:11 np0005555520 systemd[1]: Starting Cloud-init: Final Stage...
Dec 11 07:59:11 np0005555520 dracut[1285]: dracut-057-102.git20250818.el9
Dec 11 07:59:11 np0005555520 cloud-init[1310]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 11 Dec 2025 12:59:11 +0000. Up 12.59 seconds.
Dec 11 07:59:11 np0005555520 dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 11 07:59:11 np0005555520 cloud-init[1351]: #############################################################
Dec 11 07:59:11 np0005555520 cloud-init[1354]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 11 07:59:11 np0005555520 cloud-init[1360]: 256 SHA256:1rZ6fd7kk1v19P4Iz0yjKe2AmNdS+/brP8zy2/08P6M root@np0005555520.novalocal (ECDSA)
Dec 11 07:59:11 np0005555520 cloud-init[1362]: 256 SHA256:CEP4VJbaY8RNdES0Hp7oL1caIdSdpkOlQ5hNmSnGNw8 root@np0005555520.novalocal (ED25519)
Dec 11 07:59:11 np0005555520 cloud-init[1364]: 3072 SHA256:1zhVWynTUkrKhXfgqDvoWU6o+1ppwhh5uF8VWOgTnC8 root@np0005555520.novalocal (RSA)
Dec 11 07:59:11 np0005555520 cloud-init[1365]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 11 07:59:11 np0005555520 cloud-init[1366]: #############################################################
Dec 11 07:59:11 np0005555520 cloud-init[1310]: Cloud-init v. 24.4-7.el9 finished at Thu, 11 Dec 2025 12:59:11 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.77 seconds
Dec 11 07:59:11 np0005555520 systemd[1]: Finished Cloud-init: Final Stage.
Dec 11 07:59:11 np0005555520 systemd[1]: Reached target Cloud-init target.
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: memstrack is not available
Dec 11 07:59:12 np0005555520 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 11 07:59:12 np0005555520 dracut[1287]: memstrack is not available
Dec 11 07:59:12 np0005555520 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 11 07:59:13 np0005555520 dracut[1287]: *** Including module: systemd ***
Dec 11 07:59:13 np0005555520 dracut[1287]: *** Including module: fips ***
Dec 11 07:59:13 np0005555520 dracut[1287]: *** Including module: systemd-initrd ***
Dec 11 07:59:13 np0005555520 dracut[1287]: *** Including module: i18n ***
Dec 11 07:59:13 np0005555520 dracut[1287]: *** Including module: drm ***
Dec 11 07:59:13 np0005555520 chronyd[790]: Selected source 158.69.193.108 (2.centos.pool.ntp.org)
Dec 11 07:59:13 np0005555520 chronyd[790]: System clock TAI offset set to 37 seconds
Dec 11 07:59:14 np0005555520 dracut[1287]: *** Including module: prefixdevname ***
Dec 11 07:59:14 np0005555520 dracut[1287]: *** Including module: kernel-modules ***
Dec 11 07:59:14 np0005555520 kernel: block vda: the capability attribute has been deprecated.
Dec 11 07:59:14 np0005555520 dracut[1287]: *** Including module: kernel-modules-extra ***
Dec 11 07:59:14 np0005555520 dracut[1287]: *** Including module: qemu ***
Dec 11 07:59:14 np0005555520 dracut[1287]: *** Including module: fstab-sys ***
Dec 11 07:59:14 np0005555520 dracut[1287]: *** Including module: rootfs-block ***
Dec 11 07:59:14 np0005555520 dracut[1287]: *** Including module: terminfo ***
Dec 11 07:59:14 np0005555520 dracut[1287]: *** Including module: udev-rules ***
Dec 11 07:59:15 np0005555520 dracut[1287]: Skipping udev rule: 91-permissions.rules
Dec 11 07:59:15 np0005555520 dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 11 07:59:15 np0005555520 dracut[1287]: *** Including module: virtiofs ***
Dec 11 07:59:15 np0005555520 dracut[1287]: *** Including module: dracut-systemd ***
Dec 11 07:59:15 np0005555520 dracut[1287]: *** Including module: usrmount ***
Dec 11 07:59:15 np0005555520 dracut[1287]: *** Including module: base ***
Dec 11 07:59:15 np0005555520 dracut[1287]: *** Including module: fs-lib ***
Dec 11 07:59:15 np0005555520 dracut[1287]: *** Including module: kdumpbase ***
Dec 11 07:59:16 np0005555520 dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 11 07:59:16 np0005555520 dracut[1287]:  microcode_ctl module: mangling fw_dir
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 11 07:59:16 np0005555520 dracut[1287]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 11 07:59:16 np0005555520 dracut[1287]: *** Including module: openssl ***
Dec 11 07:59:16 np0005555520 dracut[1287]: *** Including module: shutdown ***
Dec 11 07:59:16 np0005555520 dracut[1287]: *** Including module: squash ***
Dec 11 07:59:16 np0005555520 dracut[1287]: *** Including modules done ***
Dec 11 07:59:16 np0005555520 dracut[1287]: *** Installing kernel module dependencies ***
Dec 11 07:59:17 np0005555520 dracut[1287]: *** Installing kernel module dependencies done ***
Dec 11 07:59:17 np0005555520 dracut[1287]: *** Resolving executable dependencies ***
Dec 11 07:59:18 np0005555520 irqbalance[781]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 11 07:59:18 np0005555520 irqbalance[781]: IRQ 25 affinity is now unmanaged
Dec 11 07:59:18 np0005555520 irqbalance[781]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 11 07:59:18 np0005555520 irqbalance[781]: IRQ 31 affinity is now unmanaged
Dec 11 07:59:18 np0005555520 irqbalance[781]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 11 07:59:18 np0005555520 irqbalance[781]: IRQ 28 affinity is now unmanaged
Dec 11 07:59:18 np0005555520 irqbalance[781]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 11 07:59:18 np0005555520 irqbalance[781]: IRQ 32 affinity is now unmanaged
Dec 11 07:59:18 np0005555520 irqbalance[781]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 11 07:59:18 np0005555520 irqbalance[781]: IRQ 30 affinity is now unmanaged
Dec 11 07:59:18 np0005555520 irqbalance[781]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 11 07:59:18 np0005555520 irqbalance[781]: IRQ 29 affinity is now unmanaged
Dec 11 07:59:19 np0005555520 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 07:59:19 np0005555520 dracut[1287]: *** Resolving executable dependencies done ***
Dec 11 07:59:19 np0005555520 dracut[1287]: *** Generating early-microcode cpio image ***
Dec 11 07:59:19 np0005555520 dracut[1287]: *** Store current command line parameters ***
Dec 11 07:59:19 np0005555520 dracut[1287]: Stored kernel commandline:
Dec 11 07:59:19 np0005555520 dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Dec 11 07:59:19 np0005555520 dracut[1287]: *** Install squash loader ***
Dec 11 07:59:20 np0005555520 dracut[1287]: *** Squashing the files inside the initramfs ***
Dec 11 07:59:21 np0005555520 dracut[1287]: *** Squashing the files inside the initramfs done ***
Dec 11 07:59:21 np0005555520 dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 11 07:59:21 np0005555520 dracut[1287]: *** Hardlinking files ***
Dec 11 07:59:21 np0005555520 dracut[1287]: *** Hardlinking files done ***
Dec 11 07:59:21 np0005555520 dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 11 07:59:22 np0005555520 kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Dec 11 07:59:22 np0005555520 kdumpctl[1020]: kdump: Starting kdump: [OK]
Dec 11 07:59:22 np0005555520 systemd[1]: Finished Crash recovery kernel arming.
Dec 11 07:59:22 np0005555520 systemd[1]: Startup finished in 3.873s (kernel) + 2.762s (initrd) + 16.553s (userspace) = 23.189s.
Dec 11 07:59:38 np0005555520 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:00:27 np0005555520 systemd[1]: Created slice User Slice of UID 1000.
Dec 11 08:00:27 np0005555520 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 11 08:00:27 np0005555520 systemd-logind[786]: New session 1 of user zuul.
Dec 11 08:00:27 np0005555520 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 11 08:00:27 np0005555520 systemd[1]: Starting User Manager for UID 1000...
Dec 11 08:00:27 np0005555520 systemd[4304]: Queued start job for default target Main User Target.
Dec 11 08:00:27 np0005555520 systemd[4304]: Created slice User Application Slice.
Dec 11 08:00:27 np0005555520 systemd[4304]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 11 08:00:27 np0005555520 systemd[4304]: Started Daily Cleanup of User's Temporary Directories.
Dec 11 08:00:27 np0005555520 systemd[4304]: Reached target Paths.
Dec 11 08:00:27 np0005555520 systemd[4304]: Reached target Timers.
Dec 11 08:00:27 np0005555520 systemd[4304]: Starting D-Bus User Message Bus Socket...
Dec 11 08:00:27 np0005555520 systemd[4304]: Starting Create User's Volatile Files and Directories...
Dec 11 08:00:27 np0005555520 systemd[4304]: Listening on D-Bus User Message Bus Socket.
Dec 11 08:00:27 np0005555520 systemd[4304]: Reached target Sockets.
Dec 11 08:00:27 np0005555520 systemd[4304]: Finished Create User's Volatile Files and Directories.
Dec 11 08:00:27 np0005555520 systemd[4304]: Reached target Basic System.
Dec 11 08:00:27 np0005555520 systemd[4304]: Reached target Main User Target.
Dec 11 08:00:27 np0005555520 systemd[4304]: Startup finished in 117ms.
Dec 11 08:00:27 np0005555520 systemd[1]: Started User Manager for UID 1000.
Dec 11 08:00:27 np0005555520 systemd[1]: Started Session 1 of User zuul.
Dec 11 08:00:27 np0005555520 python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:00:30 np0005555520 python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:00:36 np0005555520 python3[4472]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:00:37 np0005555520 python3[4512]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 11 08:00:39 np0005555520 python3[4538]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEeXZ2Ois5WZBtCySWXv84EXvXOi5igcF6b0gUd6MeGxy/zbqUUmHBy9lhr69iWaAzIaE0GkRuSzDzf57fquX5VpWeamV1e3T5cDsOM4pAOBd9JpSXKo0EhER+tUSuIQshO7Ehdr4tOcEonnLnKm37rgY8Kk0aL78oVOkBo9qwb6puPYuHW0ZnzC+4lyGnJEbGbfKi0yJ41ea2mT7W247iWeW85Fhn4Eh5xQDDUKqqqo+1eWYnDTN0ZlUirs0V151UBQCVFy3IZwO+9oncrGahKhUXV3O5xSrAKBW+8AwuBaFmcnO7v1+ASW1xsAf4WU+M44+ZuZLj5e37Px6d27ZGX1WYdKvjTtVd0q3matidYjsrhERLVygrBNHs1xqVk3tEzpm3eOszTIxU/E2jX4wSkmT4O4Bwr4mo3PzDcWYrl+C0JvftntZGLUMlm1DxSzagitOtMXuxTaxQpF95r/THGj2VOgcWNGkEhMfUvx7yzQTRvJmUw414tpMKyj+OTac= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:39 np0005555520 python3[4562]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:39 np0005555520 python3[4661]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:00:40 np0005555520 python3[4732]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765458039.5125968-207-146153745866539/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0e2e222389c8473e962a2b4fac18ea0d_id_rsa follow=False checksum=4de4cd87ad394f890cd13229f0f9154c49184b4f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:40 np0005555520 python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:00:41 np0005555520 python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765458040.3815074-240-65917723252594/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0e2e222389c8473e962a2b4fac18ea0d_id_rsa.pub follow=False checksum=d175f3a86d2f5dae1db1d65f7ef71ca7aa63a89a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:42 np0005555520 python3[4974]: ansible-ping Invoked with data=pong
Dec 11 08:00:43 np0005555520 python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:00:45 np0005555520 python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 11 08:00:45 np0005555520 python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:46 np0005555520 python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:46 np0005555520 python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:46 np0005555520 python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:46 np0005555520 python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:47 np0005555520 python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:48 np0005555520 python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:49 np0005555520 python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:00:49 np0005555520 python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765458048.8819106-21-70900445839410/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:00:50 np0005555520 python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:50 np0005555520 python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:50 np0005555520 python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:51 np0005555520 python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:51 np0005555520 python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:51 np0005555520 python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:52 np0005555520 python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:52 np0005555520 python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:52 np0005555520 python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:52 np0005555520 python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:53 np0005555520 python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:53 np0005555520 python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:53 np0005555520 python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:54 np0005555520 python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:54 np0005555520 python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:54 np0005555520 python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:54 np0005555520 python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:55 np0005555520 python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:55 np0005555520 python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:55 np0005555520 python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:56 np0005555520 python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:56 np0005555520 python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:56 np0005555520 python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:56 np0005555520 python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:57 np0005555520 python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:00:57 np0005555520 python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:01:00 np0005555520 python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 11 08:01:00 np0005555520 systemd[1]: Starting Time & Date Service...
Dec 11 08:01:00 np0005555520 systemd[1]: Started Time & Date Service.
Dec 11 08:01:00 np0005555520 systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Dec 11 08:01:00 np0005555520 python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:01:01 np0005555520 python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:01:01 np0005555520 python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765458060.7403152-153-4842228974985/source _original_basename=tmpafwsvlzp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:01:01 np0005555520 python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:01:02 np0005555520 python3[6423]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765458061.6140368-183-67935939282220/source _original_basename=tmpb825k6b2 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:01:03 np0005555520 python3[6525]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:01:03 np0005555520 python3[6598]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765458062.7604342-231-152965105144906/source _original_basename=tmpa_ew32p0 follow=False checksum=5bcad924ade0a9c5bf475bd12ba33bf4bd488ef0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:01:04 np0005555520 python3[6646]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:01:04 np0005555520 python3[6672]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:01:04 np0005555520 python3[6752]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:01:05 np0005555520 python3[6825]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765458064.4221916-273-46715085125191/source _original_basename=tmp9sn_zvq4 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:01:05 np0005555520 python3[6876]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-456b-16e8-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:01:06 np0005555520 python3[6904]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-456b-16e8-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 11 08:01:07 np0005555520 python3[6932]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:01:27 np0005555520 python3[6958]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:01:30 np0005555520 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 11 08:02:02 np0005555520 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 11 08:02:02 np0005555520 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 11 08:02:02 np0005555520 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 11 08:02:02 np0005555520 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 11 08:02:02 np0005555520 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 11 08:02:02 np0005555520 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 11 08:02:02 np0005555520 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 11 08:02:02 np0005555520 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 11 08:02:02 np0005555520 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 11 08:02:02 np0005555520 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.2833] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 11 08:02:02 np0005555520 systemd-udevd[6962]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.2982] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.3008] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.3010] device (eth1): carrier: link connected
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.3011] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.3016] policy: auto-activating connection 'Wired connection 1' (b349ef0a-2e86-3046-8ae1-ba68f6decadc)
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.3019] device (eth1): Activation: starting connection 'Wired connection 1' (b349ef0a-2e86-3046-8ae1-ba68f6decadc)
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.3020] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.3022] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.3025] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:02:02 np0005555520 NetworkManager[860]: <info>  [1765458122.3029] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:02:03 np0005555520 python3[6988]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-4432-e290-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:02:10 np0005555520 python3[7068]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:02:10 np0005555520 python3[7141]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765458129.737486-102-244350986160198/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=a368eac2ce0523b0306a45166cae1c0568fb8af2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:02:11 np0005555520 python3[7191]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:02:11 np0005555520 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 11 08:02:11 np0005555520 systemd[1]: Stopped Network Manager Wait Online.
Dec 11 08:02:11 np0005555520 systemd[1]: Stopping Network Manager Wait Online...
Dec 11 08:02:11 np0005555520 systemd[1]: Stopping Network Manager...
Dec 11 08:02:11 np0005555520 NetworkManager[860]: <info>  [1765458131.2609] caught SIGTERM, shutting down normally.
Dec 11 08:02:11 np0005555520 NetworkManager[860]: <info>  [1765458131.2619] dhcp4 (eth0): canceled DHCP transaction
Dec 11 08:02:11 np0005555520 NetworkManager[860]: <info>  [1765458131.2619] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:02:11 np0005555520 NetworkManager[860]: <info>  [1765458131.2619] dhcp4 (eth0): state changed no lease
Dec 11 08:02:11 np0005555520 NetworkManager[860]: <info>  [1765458131.2622] manager: NetworkManager state is now CONNECTING
Dec 11 08:02:11 np0005555520 NetworkManager[860]: <info>  [1765458131.2768] dhcp4 (eth1): canceled DHCP transaction
Dec 11 08:02:11 np0005555520 NetworkManager[860]: <info>  [1765458131.2768] dhcp4 (eth1): state changed no lease
Dec 11 08:02:11 np0005555520 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:02:11 np0005555520 NetworkManager[860]: <info>  [1765458131.2831] exiting (success)
Dec 11 08:02:11 np0005555520 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:02:11 np0005555520 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 11 08:02:11 np0005555520 systemd[1]: Stopped Network Manager.
Dec 11 08:02:11 np0005555520 systemd[1]: NetworkManager.service: Consumed 1.218s CPU time, 10.1M memory peak.
Dec 11 08:02:11 np0005555520 systemd[1]: Starting Network Manager...
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.3416] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:e322ae57-3e2e-454a-a9ae-b6dc8afa14c6)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.3418] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.3464] manager[0x555b95092000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 11 08:02:11 np0005555520 systemd[1]: Starting Hostname Service...
Dec 11 08:02:11 np0005555520 systemd[1]: Started Hostname Service.
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4428] hostname: hostname: using hostnamed
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4429] hostname: static hostname changed from (none) to "np0005555520.novalocal"
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4433] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4438] manager[0x555b95092000]: rfkill: Wi-Fi hardware radio set enabled
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4439] manager[0x555b95092000]: rfkill: WWAN hardware radio set enabled
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4462] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4462] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4463] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4463] manager: Networking is enabled by state file
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4465] settings: Loaded settings plugin: keyfile (internal)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4469] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4491] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4500] dhcp: init: Using DHCP client 'internal'
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4502] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4506] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4510] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4516] device (lo): Activation: starting connection 'lo' (36a70f10-2cec-4899-9bbf-682d0ec4233b)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4522] device (eth0): carrier: link connected
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4525] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4529] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4529] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4534] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4540] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4545] device (eth1): carrier: link connected
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4549] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4552] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (b349ef0a-2e86-3046-8ae1-ba68f6decadc) (indicated)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4553] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4557] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4562] device (eth1): Activation: starting connection 'Wired connection 1' (b349ef0a-2e86-3046-8ae1-ba68f6decadc)
Dec 11 08:02:11 np0005555520 systemd[1]: Started Network Manager.
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4567] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4569] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4571] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4572] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4574] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4576] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4577] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4579] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4581] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4585] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4586] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4594] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4596] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4610] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4615] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4619] device (lo): Activation: successful, device activated.
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4626] dhcp4 (eth0): state changed new lease, address=38.129.56.119
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4631] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 11 08:02:11 np0005555520 systemd[1]: Starting Network Manager Wait Online...
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4705] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4719] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4721] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4724] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4726] device (eth0): Activation: successful, device activated.
Dec 11 08:02:11 np0005555520 NetworkManager[7203]: <info>  [1765458131.4730] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 11 08:02:11 np0005555520 python3[7275]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-4432-e290-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:02:21 np0005555520 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:02:41 np0005555520 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:02:55 np0005555520 systemd[4304]: Starting Mark boot as successful...
Dec 11 08:02:55 np0005555520 systemd[4304]: Finished Mark boot as successful.
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.0550] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 08:02:57 np0005555520 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:02:57 np0005555520 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.0897] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.0901] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.0910] device (eth1): Activation: successful, device activated.
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.0917] manager: startup complete
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.0919] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <warn>  [1765458177.0930] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.0939] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 11 08:02:57 np0005555520 systemd[1]: Finished Network Manager Wait Online.
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1018] dhcp4 (eth1): canceled DHCP transaction
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1018] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1019] dhcp4 (eth1): state changed no lease
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1036] policy: auto-activating connection 'ci-private-network' (cac4ab7f-801f-52ff-ba00-bece3e223d4a)
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1041] device (eth1): Activation: starting connection 'ci-private-network' (cac4ab7f-801f-52ff-ba00-bece3e223d4a)
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1042] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1046] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1054] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1064] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1104] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1106] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:02:57 np0005555520 NetworkManager[7203]: <info>  [1765458177.1113] device (eth1): Activation: successful, device activated.
Dec 11 08:03:07 np0005555520 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:03:09 np0005555520 python3[7381]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:03:10 np0005555520 python3[7454]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765458189.2018826-259-42964196636360/source _original_basename=tmpkdjuznnb follow=False checksum=5c96beb8e6cf1019d24e1de08b4659ffabd625de backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:04:10 np0005555520 systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Dec 11 08:05:55 np0005555520 systemd[4304]: Created slice User Background Tasks Slice.
Dec 11 08:05:55 np0005555520 systemd[4304]: Starting Cleanup of User's Temporary Files and Directories...
Dec 11 08:05:55 np0005555520 systemd[4304]: Finished Cleanup of User's Temporary Files and Directories.
Dec 11 08:09:00 np0005555520 systemd-logind[786]: New session 3 of user zuul.
Dec 11 08:09:00 np0005555520 systemd[1]: Started Session 3 of User zuul.
Dec 11 08:09:01 np0005555520 python3[7513]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-4439-93ff-000000001f21-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:09:01 np0005555520 python3[7541]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:09:01 np0005555520 python3[7568]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:09:01 np0005555520 python3[7594]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:09:02 np0005555520 python3[7620]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:09:02 np0005555520 python3[7646]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:09:03 np0005555520 python3[7724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:09:03 np0005555520 python3[7797]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765458542.7682967-496-230995963655518/source _original_basename=tmpl3c058st follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:09:04 np0005555520 python3[7847]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:09:04 np0005555520 systemd[1]: Reloading.
Dec 11 08:09:04 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:09:06 np0005555520 python3[7903]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 11 08:09:06 np0005555520 python3[7929]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:09:06 np0005555520 python3[7957]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:09:07 np0005555520 python3[7985]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:09:07 np0005555520 python3[8013]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:09:07 np0005555520 python3[8040]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-4439-93ff-000000001f28-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:09:08 np0005555520 python3[8070]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 08:09:10 np0005555520 systemd[1]: session-3.scope: Deactivated successfully.
Dec 11 08:09:10 np0005555520 systemd[1]: session-3.scope: Consumed 4.246s CPU time.
Dec 11 08:09:10 np0005555520 systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Dec 11 08:09:10 np0005555520 systemd-logind[786]: Removed session 3.
Dec 11 08:09:11 np0005555520 systemd-logind[786]: New session 4 of user zuul.
Dec 11 08:09:11 np0005555520 systemd[1]: Started Session 4 of User zuul.
Dec 11 08:09:12 np0005555520 python3[8104]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 11 08:09:30 np0005555520 kernel: SELinux:  Converting 384 SID table entries...
Dec 11 08:09:30 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:09:30 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:09:30 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:09:30 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:09:30 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:09:30 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:09:30 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:09:40 np0005555520 kernel: SELinux:  Converting 384 SID table entries...
Dec 11 08:09:40 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:09:40 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:09:40 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:09:40 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:09:40 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:09:40 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:09:40 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:09:48 np0005555520 kernel: SELinux:  Converting 384 SID table entries...
Dec 11 08:09:48 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:09:48 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:09:48 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:09:48 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:09:48 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:09:48 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:09:48 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:09:50 np0005555520 setsebool[8170]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 11 08:09:50 np0005555520 setsebool[8170]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 11 08:10:02 np0005555520 kernel: SELinux:  Converting 387 SID table entries...
Dec 11 08:10:02 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:10:02 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:10:02 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:10:02 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:10:02 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:10:02 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:10:02 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:10:20 np0005555520 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 11 08:10:20 np0005555520 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:10:20 np0005555520 systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:10:20 np0005555520 systemd[1]: Reloading.
Dec 11 08:10:20 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:10:20 np0005555520 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:10:34 np0005555520 python3[17537]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-06ad-b7fa-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:10:34 np0005555520 kernel: evm: overlay not supported
Dec 11 08:10:34 np0005555520 systemd[4304]: Starting D-Bus User Message Bus...
Dec 11 08:10:34 np0005555520 dbus-broker-launch[18025]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 11 08:10:34 np0005555520 dbus-broker-launch[18025]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 11 08:10:34 np0005555520 systemd[4304]: Started D-Bus User Message Bus.
Dec 11 08:10:34 np0005555520 dbus-broker-lau[18025]: Ready
Dec 11 08:10:34 np0005555520 systemd[4304]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 11 08:10:34 np0005555520 systemd[4304]: Created slice Slice /user.
Dec 11 08:10:34 np0005555520 systemd[4304]: podman-17951.scope: unit configures an IP firewall, but not running as root.
Dec 11 08:10:34 np0005555520 systemd[4304]: (This warning is only shown for the first unit using IP firewalling.)
Dec 11 08:10:35 np0005555520 systemd[4304]: Started podman-17951.scope.
Dec 11 08:10:35 np0005555520 systemd[4304]: Started podman-pause-1b6674ef.scope.
Dec 11 08:10:36 np0005555520 python3[18428]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.98:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.98:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:10:36 np0005555520 python3[18428]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 11 08:10:36 np0005555520 systemd[1]: session-4.scope: Deactivated successfully.
Dec 11 08:10:36 np0005555520 systemd[1]: session-4.scope: Consumed 1min 715ms CPU time.
Dec 11 08:10:36 np0005555520 systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Dec 11 08:10:36 np0005555520 systemd-logind[786]: Removed session 4.
Dec 11 08:10:48 np0005555520 irqbalance[781]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 11 08:10:48 np0005555520 irqbalance[781]: IRQ 27 affinity is now unmanaged
Dec 11 08:11:00 np0005555520 systemd-logind[786]: New session 5 of user zuul.
Dec 11 08:11:00 np0005555520 systemd[1]: Started Session 5 of User zuul.
Dec 11 08:11:01 np0005555520 python3[28628]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDWjP/CoJccSPs/IRiR+lNUXTGgFMA8hFO7UweJoFdBTKagLhXHOTkQT8pcE9bcIHncIDir1LOC5d9Qw718SeG0= zuul@np0005555519.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:11:01 np0005555520 python3[28831]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDWjP/CoJccSPs/IRiR+lNUXTGgFMA8hFO7UweJoFdBTKagLhXHOTkQT8pcE9bcIHncIDir1LOC5d9Qw718SeG0= zuul@np0005555519.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:11:02 np0005555520 python3[29168]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005555520.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 11 08:11:02 np0005555520 python3[29356]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDWjP/CoJccSPs/IRiR+lNUXTGgFMA8hFO7UweJoFdBTKagLhXHOTkQT8pcE9bcIHncIDir1LOC5d9Qw718SeG0= zuul@np0005555519.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:11:03 np0005555520 python3[29609]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:11:03 np0005555520 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:11:03 np0005555520 systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:11:03 np0005555520 systemd[1]: man-db-cache-update.service: Consumed 52.346s CPU time.
Dec 11 08:11:03 np0005555520 systemd[1]: run-r30d0699f062044d8bb9693c31b64b737.service: Deactivated successfully.
Dec 11 08:11:03 np0005555520 python3[29847]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765458663.0044084-135-252534890397281/source _original_basename=tmp0e__fagc follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:11:04 np0005555520 python3[29909]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 11 08:11:04 np0005555520 systemd[1]: Starting Hostname Service...
Dec 11 08:11:04 np0005555520 systemd[1]: Started Hostname Service.
Dec 11 08:11:04 np0005555520 systemd-hostnamed[29913]: Changed pretty hostname to 'compute-0'
Dec 11 08:11:04 np0005555520 systemd-hostnamed[29913]: Hostname set to <compute-0> (static)
Dec 11 08:11:04 np0005555520 NetworkManager[7203]: <info>  [1765458664.6158] hostname: static hostname changed from "np0005555520.novalocal" to "compute-0"
Dec 11 08:11:04 np0005555520 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:11:04 np0005555520 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:11:05 np0005555520 systemd[1]: session-5.scope: Deactivated successfully.
Dec 11 08:11:05 np0005555520 systemd[1]: session-5.scope: Consumed 2.190s CPU time.
Dec 11 08:11:05 np0005555520 systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Dec 11 08:11:05 np0005555520 systemd-logind[786]: Removed session 5.
Dec 11 08:11:14 np0005555520 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:11:34 np0005555520 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:14:11 np0005555520 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 11 08:14:11 np0005555520 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 11 08:14:11 np0005555520 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 11 08:14:11 np0005555520 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 11 08:16:01 np0005555520 systemd-logind[786]: New session 6 of user zuul.
Dec 11 08:16:01 np0005555520 systemd[1]: Started Session 6 of User zuul.
Dec 11 08:16:02 np0005555520 python3[30015]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:16:03 np0005555520 python3[30131]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:16:04 np0005555520 python3[30204]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765458963.5577004-33665-154329227549066/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:16:04 np0005555520 python3[30230]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:16:05 np0005555520 python3[30303]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765458963.5577004-33665-154329227549066/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:16:05 np0005555520 python3[30329]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:16:06 np0005555520 python3[30402]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765458963.5577004-33665-154329227549066/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:16:06 np0005555520 python3[30428]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:16:06 np0005555520 python3[30501]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765458963.5577004-33665-154329227549066/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:16:06 np0005555520 python3[30527]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:16:07 np0005555520 python3[30600]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765458963.5577004-33665-154329227549066/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:16:07 np0005555520 python3[30626]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:16:07 np0005555520 python3[30699]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765458963.5577004-33665-154329227549066/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:16:08 np0005555520 python3[30725]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:16:08 np0005555520 python3[30798]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765458963.5577004-33665-154329227549066/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:18:50 np0005555520 python3[30860]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:23:50 np0005555520 systemd[1]: session-6.scope: Deactivated successfully.
Dec 11 08:23:50 np0005555520 systemd[1]: session-6.scope: Consumed 4.858s CPU time.
Dec 11 08:23:50 np0005555520 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Dec 11 08:23:50 np0005555520 systemd-logind[786]: Removed session 6.
Dec 11 08:29:45 np0005555520 systemd[1]: Starting dnf makecache...
Dec 11 08:29:45 np0005555520 dnf[30870]: Failed determining last makecache time.
Dec 11 08:29:45 np0005555520 dnf[30870]: delorean-openstack-barbican-42b4c41831408a8e323 321 kB/s |  13 kB     00:00
Dec 11 08:29:45 np0005555520 dnf[30870]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 548 kB/s |  65 kB     00:00
Dec 11 08:29:45 np0005555520 dnf[30870]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.1 MB/s |  32 kB     00:00
Dec 11 08:29:45 np0005555520 dnf[30870]: delorean-python-stevedore-c4acc5639fd2329372142 4.5 MB/s | 131 kB     00:00
Dec 11 08:29:45 np0005555520 dnf[30870]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.4 MB/s |  32 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-os-refresh-config-9bfc52b5049be2d8de61  11 MB/s | 349 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.7 MB/s |  42 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-python-designate-tests-tempest-347fdbc 686 kB/s |  18 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-openstack-glance-1fd12c29b339f30fe823e 729 kB/s |  18 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.1 MB/s |  29 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-openstack-manila-3c01b7181572c95dac462 274 kB/s |  25 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-python-whitebox-neutron-tests-tempest- 3.1 MB/s | 154 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-openstack-octavia-ba397f07a7331190208c 1.0 MB/s |  26 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-openstack-watcher-c014f81a8647287f6dcc 165 kB/s |  16 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-ansible-config_template-5ccaa22121a7ff 304 kB/s | 7.4 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 5.6 MB/s | 144 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-openstack-swift-dc98a8463506ac520c469a 533 kB/s |  14 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-python-tempestconf-8515371b7cceebd4282 2.3 MB/s |  53 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.4 MB/s |  96 kB     00:00
Dec 11 08:29:46 np0005555520 dnf[30870]: CentOS Stream 9 - BaseOS                         65 kB/s | 7.3 kB     00:00
Dec 11 08:29:47 np0005555520 dnf[30870]: CentOS Stream 9 - AppStream                      34 kB/s | 7.8 kB     00:00
Dec 11 08:29:47 np0005555520 dnf[30870]: CentOS Stream 9 - CRB                            71 kB/s | 7.2 kB     00:00
Dec 11 08:29:47 np0005555520 dnf[30870]: CentOS Stream 9 - Extras packages                70 kB/s | 8.3 kB     00:00
Dec 11 08:29:47 np0005555520 dnf[30870]: dlrn-antelope-testing                            22 MB/s | 1.1 MB     00:00
Dec 11 08:29:48 np0005555520 dnf[30870]: dlrn-antelope-build-deps                         14 MB/s | 461 kB     00:00
Dec 11 08:29:48 np0005555520 dnf[30870]: centos9-rabbitmq                                8.4 MB/s | 123 kB     00:00
Dec 11 08:29:48 np0005555520 dnf[30870]: centos9-storage                                  17 MB/s | 415 kB     00:00
Dec 11 08:29:48 np0005555520 dnf[30870]: centos9-opstools                                3.5 MB/s |  51 kB     00:00
Dec 11 08:29:48 np0005555520 dnf[30870]: NFV SIG OpenvSwitch                              18 MB/s | 457 kB     00:00
Dec 11 08:29:49 np0005555520 dnf[30870]: repo-setup-centos-appstream                      75 MB/s |  26 MB     00:00
Dec 11 08:29:55 np0005555520 dnf[30870]: repo-setup-centos-baseos                         68 MB/s | 8.8 MB     00:00
Dec 11 08:29:56 np0005555520 dnf[30870]: repo-setup-centos-highavailability               26 MB/s | 744 kB     00:00
Dec 11 08:29:57 np0005555520 dnf[30870]: repo-setup-centos-powertools                     59 MB/s | 7.4 MB     00:00
Dec 11 08:30:00 np0005555520 dnf[30870]: Extra Packages for Enterprise Linux 9 - x86_64   14 MB/s |  20 MB     00:01
Dec 11 08:30:13 np0005555520 dnf[30870]: Metadata cache created.
Dec 11 08:30:13 np0005555520 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 11 08:30:13 np0005555520 systemd[1]: Finished dnf makecache.
Dec 11 08:30:13 np0005555520 systemd[1]: dnf-makecache.service: Consumed 25.587s CPU time.
Dec 11 08:32:12 np0005555520 systemd-logind[786]: New session 7 of user zuul.
Dec 11 08:32:12 np0005555520 systemd[1]: Started Session 7 of User zuul.
Dec 11 08:32:13 np0005555520 python3.9[31133]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:32:14 np0005555520 python3.9[31314]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:32:24 np0005555520 systemd[1]: session-7.scope: Deactivated successfully.
Dec 11 08:32:24 np0005555520 systemd[1]: session-7.scope: Consumed 8.001s CPU time.
Dec 11 08:32:24 np0005555520 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Dec 11 08:32:24 np0005555520 systemd-logind[786]: Removed session 7.
Dec 11 08:32:30 np0005555520 systemd-logind[786]: New session 8 of user zuul.
Dec 11 08:32:30 np0005555520 systemd[1]: Started Session 8 of User zuul.
Dec 11 08:32:31 np0005555520 python3.9[31525]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:32:32 np0005555520 systemd[1]: session-8.scope: Deactivated successfully.
Dec 11 08:32:32 np0005555520 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Dec 11 08:32:32 np0005555520 systemd-logind[786]: Removed session 8.
Dec 11 08:32:49 np0005555520 systemd-logind[786]: New session 9 of user zuul.
Dec 11 08:32:49 np0005555520 systemd[1]: Started Session 9 of User zuul.
Dec 11 08:32:49 np0005555520 python3.9[31707]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 11 08:32:51 np0005555520 python3.9[31881]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:32:52 np0005555520 python3.9[32033]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:32:53 np0005555520 python3.9[32186]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:32:53 np0005555520 python3.9[32338]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:32:54 np0005555520 python3.9[32490]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:32:55 np0005555520 python3.9[32613]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765459974.0525978-73-136580090392898/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:32:56 np0005555520 python3.9[32765]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:32:57 np0005555520 python3.9[32921]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:32:58 np0005555520 python3.9[33073]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:32:59 np0005555520 python3.9[33223]: ansible-ansible.builtin.service_facts Invoked
Dec 11 08:33:02 np0005555520 python3.9[33478]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:33:02 np0005555520 python3.9[33628]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:33:04 np0005555520 python3.9[33782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:33:04 np0005555520 python3.9[33940]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:33:05 np0005555520 python3.9[34024]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:33:38 np0005555520 irqbalance[781]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 11 08:33:38 np0005555520 irqbalance[781]: IRQ 26 affinity is now unmanaged
Dec 11 08:33:49 np0005555520 systemd[1]: Reloading.
Dec 11 08:33:50 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:33:50 np0005555520 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 11 08:33:50 np0005555520 systemd[1]: Reloading.
Dec 11 08:33:50 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:33:50 np0005555520 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 11 08:33:50 np0005555520 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 11 08:33:50 np0005555520 systemd[1]: Reloading.
Dec 11 08:33:50 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:33:50 np0005555520 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 11 08:33:51 np0005555520 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 11 08:33:51 np0005555520 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 11 08:33:51 np0005555520 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 11 08:35:00 np0005555520 kernel: SELinux:  Converting 2719 SID table entries...
Dec 11 08:35:00 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:35:00 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:35:00 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:35:00 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:35:00 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:35:00 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:35:00 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:35:01 np0005555520 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 11 08:35:01 np0005555520 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:35:01 np0005555520 systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:35:01 np0005555520 systemd[1]: Reloading.
Dec 11 08:35:01 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:35:01 np0005555520 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:35:02 np0005555520 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:35:02 np0005555520 systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:35:02 np0005555520 systemd[1]: man-db-cache-update.service: Consumed 1.117s CPU time.
Dec 11 08:35:02 np0005555520 systemd[1]: run-r1f73b4be98334e8a973025928839e75e.service: Deactivated successfully.
Dec 11 08:35:02 np0005555520 python3.9[35571]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:35:04 np0005555520 python3.9[35852]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 11 08:35:05 np0005555520 python3.9[36004]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 11 08:35:08 np0005555520 python3.9[36157]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:35:08 np0005555520 python3.9[36309]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 11 08:35:10 np0005555520 python3.9[36461]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:35:10 np0005555520 python3.9[36613]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:35:11 np0005555520 python3.9[36736]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460110.5243256-236-135616401800437/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3e1048d83842a22be6299411de826f2ede976d1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:35:14 np0005555520 python3.9[36888]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:35:16 np0005555520 python3.9[37040]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:35:17 np0005555520 python3.9[37193]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:35:18 np0005555520 python3.9[37345]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 11 08:35:18 np0005555520 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:35:19 np0005555520 python3.9[37499]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 08:35:20 np0005555520 python3.9[37657]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 11 08:35:21 np0005555520 python3.9[37817]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 11 08:35:21 np0005555520 python3.9[37970]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 08:35:22 np0005555520 python3.9[38128]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 11 08:35:23 np0005555520 python3.9[38280]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:35:25 np0005555520 python3.9[38433]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:35:26 np0005555520 python3.9[38587]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:35:26 np0005555520 python3.9[38710]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460125.7803845-355-228682608657619/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:35:27 np0005555520 python3.9[38862]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:35:27 np0005555520 systemd[1]: Starting Load Kernel Modules...
Dec 11 08:35:27 np0005555520 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 11 08:35:27 np0005555520 kernel: Bridge firewalling registered
Dec 11 08:35:27 np0005555520 systemd-modules-load[38866]: Inserted module 'br_netfilter'
Dec 11 08:35:27 np0005555520 systemd[1]: Finished Load Kernel Modules.
Dec 11 08:35:28 np0005555520 python3.9[39022]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:35:29 np0005555520 python3.9[39145]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460128.0365138-378-7451974977920/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:35:30 np0005555520 python3.9[39297]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:35:33 np0005555520 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 11 08:35:33 np0005555520 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 11 08:35:33 np0005555520 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:35:33 np0005555520 systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:35:33 np0005555520 systemd[1]: Reloading.
Dec 11 08:35:33 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:35:33 np0005555520 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:35:35 np0005555520 python3.9[40610]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:35:35 np0005555520 python3.9[41558]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 11 08:35:36 np0005555520 python3.9[42319]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:35:37 np0005555520 python3.9[43128]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:35:37 np0005555520 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 11 08:35:37 np0005555520 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:35:37 np0005555520 systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:35:37 np0005555520 systemd[1]: man-db-cache-update.service: Consumed 5.189s CPU time.
Dec 11 08:35:37 np0005555520 systemd[1]: run-r6764d9a53af84560b1d3d2b2fe5c477b.service: Deactivated successfully.
Dec 11 08:35:37 np0005555520 systemd[1]: Starting Authorization Manager...
Dec 11 08:35:37 np0005555520 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 11 08:35:37 np0005555520 polkitd[43686]: Started polkitd version 0.117
Dec 11 08:35:37 np0005555520 systemd[1]: Started Authorization Manager.
Dec 11 08:35:38 np0005555520 python3.9[43856]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:35:38 np0005555520 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 11 08:35:38 np0005555520 systemd[1]: tuned.service: Deactivated successfully.
Dec 11 08:35:38 np0005555520 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 11 08:35:38 np0005555520 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 11 08:35:39 np0005555520 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 11 08:35:39 np0005555520 python3.9[44017]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 11 08:35:42 np0005555520 python3.9[44169]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:35:42 np0005555520 systemd[1]: Reloading.
Dec 11 08:35:42 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:35:43 np0005555520 python3.9[44359]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:35:43 np0005555520 systemd[1]: Reloading.
Dec 11 08:35:43 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:35:44 np0005555520 python3.9[44548]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:35:44 np0005555520 python3.9[44701]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:35:44 np0005555520 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 11 08:35:45 np0005555520 python3.9[44854]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:35:47 np0005555520 python3.9[45016]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:35:48 np0005555520 python3.9[45169]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:35:48 np0005555520 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 11 08:35:48 np0005555520 systemd[1]: Stopped Apply Kernel Variables.
Dec 11 08:35:48 np0005555520 systemd[1]: Stopping Apply Kernel Variables...
Dec 11 08:35:48 np0005555520 systemd[1]: Starting Apply Kernel Variables...
Dec 11 08:35:48 np0005555520 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 11 08:35:48 np0005555520 systemd[1]: Finished Apply Kernel Variables.
Dec 11 08:35:49 np0005555520 systemd[1]: session-9.scope: Deactivated successfully.
Dec 11 08:35:49 np0005555520 systemd[1]: session-9.scope: Consumed 2min 18.589s CPU time.
Dec 11 08:35:49 np0005555520 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Dec 11 08:35:49 np0005555520 systemd-logind[786]: Removed session 9.
Dec 11 08:35:54 np0005555520 systemd-logind[786]: New session 10 of user zuul.
Dec 11 08:35:54 np0005555520 systemd[1]: Started Session 10 of User zuul.
Dec 11 08:35:55 np0005555520 python3.9[45353]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:35:56 np0005555520 python3.9[45507]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:35:58 np0005555520 python3.9[45663]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:35:59 np0005555520 python3.9[45814]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:36:00 np0005555520 python3.9[45970]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:36:01 np0005555520 python3.9[46054]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:36:03 np0005555520 python3.9[46207]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:36:04 np0005555520 python3.9[46378]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:36:04 np0005555520 python3.9[46530]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:36:04 np0005555520 systemd[1]: var-lib-containers-storage-overlay-compat506080103-merged.mount: Deactivated successfully.
Dec 11 08:36:04 np0005555520 podman[46531]: 2025-12-11 13:36:04.958031935 +0000 UTC m=+0.053272217 system refresh
Dec 11 08:36:05 np0005555520 python3.9[46693]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:36:05 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:36:06 np0005555520 python3.9[46816]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460165.1573796-109-258842014983995/.source.json follow=False _original_basename=podman_network_config.j2 checksum=416c20bea64a9567c0f4d9b761e43106b620b9d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:36:07 np0005555520 python3.9[46968]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:36:07 np0005555520 python3.9[47091]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460166.729892-124-102081842017710/.source.conf follow=False _original_basename=registries.conf.j2 checksum=74ad3fdf1c9c551f4957cab58c04bb2f8b0dc3e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:36:08 np0005555520 python3.9[47243]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:36:09 np0005555520 python3.9[47395]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:36:09 np0005555520 python3.9[47547]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:36:10 np0005555520 python3.9[47699]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:36:11 np0005555520 python3.9[47849]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:36:12 np0005555520 python3.9[48003]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:36:14 np0005555520 python3.9[48156]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:36:16 np0005555520 python3.9[48316]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:36:18 np0005555520 python3.9[48471]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:36:20 np0005555520 python3.9[48624]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:36:23 np0005555520 python3.9[48780]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:36:26 np0005555520 python3.9[48949]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:36:28 np0005555520 python3.9[49102]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:36:41 np0005555520 python3.9[49439]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:36:43 np0005555520 python3.9[49595]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:36:44 np0005555520 python3.9[49770]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:36:44 np0005555520 python3.9[49893]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765460203.8379338-272-53274487734007/.source.json _original_basename=.k43z76um follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:36:46 np0005555520 python3.9[50045]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 11 08:36:46 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:36:48 np0005555520 systemd[1]: var-lib-containers-storage-overlay-compat2396438791-lower\x2dmapped.mount: Deactivated successfully.
Dec 11 08:36:52 np0005555520 podman[50058]: 2025-12-11 13:36:52.440989717 +0000 UTC m=+6.340241436 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 11 08:36:52 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:36:52 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:36:52 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:36:53 np0005555520 python3.9[50357]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 11 08:36:53 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:04 np0005555520 podman[50369]: 2025-12-11 13:37:04.008033195 +0000 UTC m=+10.471611460 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 11 08:37:04 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:04 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:04 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:05 np0005555520 python3.9[50667]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 11 08:37:05 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:06 np0005555520 podman[50679]: 2025-12-11 13:37:06.806435234 +0000 UTC m=+1.594616833 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 11 08:37:06 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:06 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:06 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:07 np0005555520 python3.9[50914]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 11 08:37:07 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:21 np0005555520 podman[50927]: 2025-12-11 13:37:21.512061403 +0000 UTC m=+13.646163692 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 11 08:37:21 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:21 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:21 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:22 np0005555520 python3.9[51188]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 11 08:37:22 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:44 np0005555520 podman[51199]: 2025-12-11 13:37:44.152524916 +0000 UTC m=+21.331854076 image pull 80890c1805dd88d2c8dac263b5abd3451d9e16dafe570d08a1aea1bc4a84ee52 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 11 08:37:44 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:44 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:44 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:45 np0005555520 python3.9[51523]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 11 08:37:46 np0005555520 podman[51535]: 2025-12-11 13:37:46.643501426 +0000 UTC m=+1.500787942 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 11 08:37:46 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:46 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:46 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:47 np0005555520 python3.9[51810]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 11 08:37:47 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:51 np0005555520 podman[51822]: 2025-12-11 13:37:51.088769446 +0000 UTC m=+3.417354757 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 11 08:37:51 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:51 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:51 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:37:52 np0005555520 python3.9[52078]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 11 08:37:52 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:38:00 np0005555520 podman[52091]: 2025-12-11 13:38:00.552312627 +0000 UTC m=+8.111045358 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 11 08:38:00 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:38:00 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:38:00 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:38:01 np0005555520 systemd[1]: session-10.scope: Deactivated successfully.
Dec 11 08:38:01 np0005555520 systemd[1]: session-10.scope: Consumed 2min 37.714s CPU time.
Dec 11 08:38:01 np0005555520 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Dec 11 08:38:01 np0005555520 systemd-logind[786]: Removed session 10.
Dec 11 08:38:07 np0005555520 systemd-logind[786]: New session 11 of user zuul.
Dec 11 08:38:07 np0005555520 systemd[1]: Started Session 11 of User zuul.
Dec 11 08:38:08 np0005555520 python3.9[52509]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:38:09 np0005555520 python3.9[52665]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 11 08:38:10 np0005555520 python3.9[52818]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 08:38:11 np0005555520 python3.9[52976]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 11 08:38:12 np0005555520 python3.9[53136]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:38:13 np0005555520 python3.9[53220]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:38:15 np0005555520 python3.9[53382]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:38:32 np0005555520 kernel: SELinux:  Converting 2733 SID table entries...
Dec 11 08:38:32 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:38:32 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:38:32 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:38:32 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:38:32 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:38:32 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:38:32 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:38:32 np0005555520 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 11 08:38:32 np0005555520 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 11 08:38:33 np0005555520 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:38:33 np0005555520 systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:38:33 np0005555520 systemd[1]: Reloading.
Dec 11 08:38:33 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:38:33 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:38:34 np0005555520 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:38:34 np0005555520 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:38:34 np0005555520 systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:38:34 np0005555520 systemd[1]: run-r917a867e6f314e6d8da01ac7a52c8245.service: Deactivated successfully.
Dec 11 08:38:35 np0005555520 python3.9[54482]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 08:38:35 np0005555520 systemd[1]: Reloading.
Dec 11 08:38:35 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:38:35 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:38:36 np0005555520 systemd[1]: Starting Open vSwitch Database Unit...
Dec 11 08:38:36 np0005555520 chown[54524]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 11 08:38:36 np0005555520 ovs-ctl[54529]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 11 08:38:36 np0005555520 ovs-ctl[54529]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 11 08:38:36 np0005555520 ovs-ctl[54529]: Starting ovsdb-server [  OK  ]
Dec 11 08:38:36 np0005555520 ovs-vsctl[54578]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 11 08:38:36 np0005555520 ovs-vsctl[54598]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"91d1351c-e9c8-4a9c-80fe-965b575ecbf6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 11 08:38:36 np0005555520 ovs-ctl[54529]: Configuring Open vSwitch system IDs [  OK  ]
Dec 11 08:38:36 np0005555520 ovs-ctl[54529]: Enabling remote OVSDB managers [  OK  ]
Dec 11 08:38:36 np0005555520 ovs-vsctl[54604]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 11 08:38:36 np0005555520 systemd[1]: Started Open vSwitch Database Unit.
Dec 11 08:38:36 np0005555520 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 11 08:38:36 np0005555520 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 11 08:38:36 np0005555520 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 11 08:38:36 np0005555520 kernel: openvswitch: Open vSwitch switching datapath
Dec 11 08:38:36 np0005555520 ovs-ctl[54648]: Inserting openvswitch module [  OK  ]
Dec 11 08:38:36 np0005555520 ovs-ctl[54617]: Starting ovs-vswitchd [  OK  ]
Dec 11 08:38:36 np0005555520 ovs-vsctl[54666]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 11 08:38:36 np0005555520 ovs-ctl[54617]: Enabling remote OVSDB managers [  OK  ]
Dec 11 08:38:36 np0005555520 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 11 08:38:36 np0005555520 systemd[1]: Starting Open vSwitch...
Dec 11 08:38:36 np0005555520 systemd[1]: Finished Open vSwitch.
Dec 11 08:38:37 np0005555520 python3.9[54817]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:38:38 np0005555520 python3.9[54969]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 11 08:38:39 np0005555520 kernel: SELinux:  Converting 2747 SID table entries...
Dec 11 08:38:39 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:38:39 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:38:39 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:38:39 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:38:39 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:38:39 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:38:39 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:38:40 np0005555520 python3.9[55125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:38:41 np0005555520 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 11 08:38:41 np0005555520 python3.9[55283]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:38:44 np0005555520 python3.9[55436]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:38:45 np0005555520 python3.9[55723]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 11 08:38:46 np0005555520 python3.9[55873]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:38:47 np0005555520 python3.9[56027]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:38:49 np0005555520 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:38:49 np0005555520 systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:38:49 np0005555520 systemd[1]: Reloading.
Dec 11 08:38:49 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:38:49 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:38:49 np0005555520 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:38:49 np0005555520 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:38:49 np0005555520 systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:38:49 np0005555520 systemd[1]: run-r5324b44c3f6d4aeea1a433d7f13b1f6c.service: Deactivated successfully.
Dec 11 08:38:50 np0005555520 python3.9[56343]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:38:50 np0005555520 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 11 08:38:50 np0005555520 systemd[1]: Stopped Network Manager Wait Online.
Dec 11 08:38:50 np0005555520 systemd[1]: Stopping Network Manager Wait Online...
Dec 11 08:38:50 np0005555520 systemd[1]: Stopping Network Manager...
Dec 11 08:38:50 np0005555520 NetworkManager[7203]: <info>  [1765460330.8875] caught SIGTERM, shutting down normally.
Dec 11 08:38:50 np0005555520 NetworkManager[7203]: <info>  [1765460330.8904] dhcp4 (eth0): canceled DHCP transaction
Dec 11 08:38:50 np0005555520 NetworkManager[7203]: <info>  [1765460330.8905] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:38:50 np0005555520 NetworkManager[7203]: <info>  [1765460330.8905] dhcp4 (eth0): state changed no lease
Dec 11 08:38:50 np0005555520 NetworkManager[7203]: <info>  [1765460330.8911] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 08:38:50 np0005555520 NetworkManager[7203]: <info>  [1765460330.9003] exiting (success)
Dec 11 08:38:50 np0005555520 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:38:50 np0005555520 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:38:50 np0005555520 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 11 08:38:50 np0005555520 systemd[1]: Stopped Network Manager.
Dec 11 08:38:50 np0005555520 systemd[1]: NetworkManager.service: Consumed 15.809s CPU time, 4.1M memory peak, read 0B from disk, written 10.5K to disk.
Dec 11 08:38:50 np0005555520 systemd[1]: Starting Network Manager...
Dec 11 08:38:50 np0005555520 NetworkManager[56353]: <info>  [1765460330.9836] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:e322ae57-3e2e-454a-a9ae-b6dc8afa14c6)
Dec 11 08:38:50 np0005555520 NetworkManager[56353]: <info>  [1765460330.9840] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 11 08:38:50 np0005555520 NetworkManager[56353]: <info>  [1765460330.9912] manager[0x55880be65000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 11 08:38:51 np0005555520 systemd[1]: Starting Hostname Service...
Dec 11 08:38:51 np0005555520 systemd[1]: Started Hostname Service.
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0759] hostname: hostname: using hostnamed
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0760] hostname: static hostname changed from (none) to "compute-0"
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0766] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0772] manager[0x55880be65000]: rfkill: Wi-Fi hardware radio set enabled
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0773] manager[0x55880be65000]: rfkill: WWAN hardware radio set enabled
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0797] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0807] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0808] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0808] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0809] manager: Networking is enabled by state file
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0811] settings: Loaded settings plugin: keyfile (internal)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0815] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0843] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0854] dhcp: init: Using DHCP client 'internal'
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0857] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0863] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0868] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0876] device (lo): Activation: starting connection 'lo' (36a70f10-2cec-4899-9bbf-682d0ec4233b)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0882] device (eth0): carrier: link connected
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0886] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0891] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0891] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0897] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0904] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0909] device (eth1): carrier: link connected
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0913] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0917] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (cac4ab7f-801f-52ff-ba00-bece3e223d4a) (indicated)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0918] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0922] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0928] device (eth1): Activation: starting connection 'ci-private-network' (cac4ab7f-801f-52ff-ba00-bece3e223d4a)
Dec 11 08:38:51 np0005555520 systemd[1]: Started Network Manager.
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0934] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0942] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0944] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0946] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0948] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0952] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0955] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0958] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0962] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0968] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.0971] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1009] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1024] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1033] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1035] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1039] device (lo): Activation: successful, device activated.
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1047] dhcp4 (eth0): state changed new lease, address=38.129.56.119
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1054] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1124] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1132] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1133] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 systemd[1]: Starting Network Manager Wait Online...
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1135] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1138] device (eth1): Activation: successful, device activated.
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1168] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1170] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1173] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1176] device (eth0): Activation: successful, device activated.
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1180] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 11 08:38:51 np0005555520 NetworkManager[56353]: <info>  [1765460331.1206] manager: startup complete
Dec 11 08:38:51 np0005555520 systemd[1]: Finished Network Manager Wait Online.
Dec 11 08:38:51 np0005555520 python3.9[56569]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:38:57 np0005555520 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:38:57 np0005555520 systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:38:57 np0005555520 systemd[1]: Reloading.
Dec 11 08:38:57 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:38:57 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:38:57 np0005555520 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:38:59 np0005555520 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:38:59 np0005555520 systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:38:59 np0005555520 systemd[1]: run-r5be2bb51f32a4524af1b642d6f498947.service: Deactivated successfully.
Dec 11 08:39:00 np0005555520 python3.9[57033]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:39:01 np0005555520 python3.9[57185]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:01 np0005555520 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:39:02 np0005555520 python3.9[57339]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:02 np0005555520 python3.9[57491]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:03 np0005555520 python3.9[57643]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:04 np0005555520 python3.9[57795]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:04 np0005555520 python3.9[57947]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:39:05 np0005555520 python3.9[58070]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460344.3210063-229-280692539583175/.source _original_basename=.8up6yaaz follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:06 np0005555520 python3.9[58222]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:07 np0005555520 python3.9[58374]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 11 08:39:07 np0005555520 python3.9[58526]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:10 np0005555520 python3.9[58953]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 11 08:39:11 np0005555520 ansible-async_wrapper.py[59128]: Invoked with j203831945745 300 /home/zuul/.ansible/tmp/ansible-tmp-1765460350.425198-295-42941255611100/AnsiballZ_edpm_os_net_config.py _
Dec 11 08:39:11 np0005555520 ansible-async_wrapper.py[59131]: Starting module and watcher
Dec 11 08:39:11 np0005555520 ansible-async_wrapper.py[59131]: Start watching 59132 (300)
Dec 11 08:39:11 np0005555520 ansible-async_wrapper.py[59132]: Start module (59132)
Dec 11 08:39:11 np0005555520 ansible-async_wrapper.py[59128]: Return async_wrapper task started.
Dec 11 08:39:11 np0005555520 python3.9[59133]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 11 08:39:12 np0005555520 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 11 08:39:12 np0005555520 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 11 08:39:12 np0005555520 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 11 08:39:12 np0005555520 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 11 08:39:12 np0005555520 kernel: cfg80211: failed to load regulatory.db
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.4954] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.4982] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5653] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5655] audit: op="connection-add" uuid="01f7a421-9bcd-4a05-a7dd-2f47ff308bb2" name="br-ex-br" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5675] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5676] audit: op="connection-add" uuid="62361ead-a518-446f-bf52-5d8b9babf48e" name="br-ex-port" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5693] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5695] audit: op="connection-add" uuid="06a76e14-0801-48d7-9285-0ef019457ea7" name="eth1-port" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5710] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5711] audit: op="connection-add" uuid="deb5fab4-e88d-45f4-b255-b7434974bdf1" name="vlan20-port" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5728] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5729] audit: op="connection-add" uuid="7f308677-b6cf-4f43-b5da-58b596e90ef0" name="vlan21-port" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5744] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5745] audit: op="connection-add" uuid="2b2859a3-eade-437c-aae4-a28445b93a76" name="vlan22-port" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5772] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,connection.timestamp,connection.autoconnect-priority" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5794] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5796] audit: op="connection-add" uuid="d222eb3e-7853-4f73-b4e4-1cb4cf3d3a42" name="br-ex-if" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5882] audit: op="connection-update" uuid="cac4ab7f-801f-52ff-ba00-bece3e223d4a" name="ci-private-network" args="ipv4.never-default,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.routes,ipv4.routing-rules,ovs-interface.type,ipv6.routing-rules,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.routes,ipv6.addr-gen-mode,ovs-external-ids.data,connection.master,connection.timestamp,connection.slave-type,connection.controller,connection.port-type" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5908] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5910] audit: op="connection-add" uuid="db42ad8c-6e44-4853-87d9-6517f3dae50d" name="vlan20-if" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5935] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5938] audit: op="connection-add" uuid="740684cc-db1b-4840-b9fe-c7eca00969e2" name="vlan21-if" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5961] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5963] audit: op="connection-add" uuid="20ddb93c-a92b-4a7f-8656-d16ec06045d6" name="vlan22-if" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.5982] audit: op="connection-delete" uuid="b349ef0a-2e86-3046-8ae1-ba68f6decadc" name="Wired connection 1" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6000] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6005] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6016] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6022] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (01f7a421-9bcd-4a05-a7dd-2f47ff308bb2)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6022] audit: op="connection-activate" uuid="01f7a421-9bcd-4a05-a7dd-2f47ff308bb2" name="br-ex-br" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6025] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6026] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6032] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6039] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (62361ead-a518-446f-bf52-5d8b9babf48e)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6042] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6043] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6049] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6055] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (06a76e14-0801-48d7-9285-0ef019457ea7)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6057] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6059] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6069] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6075] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (deb5fab4-e88d-45f4-b255-b7434974bdf1)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6078] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6079] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6086] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6093] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7f308677-b6cf-4f43-b5da-58b596e90ef0)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6095] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6096] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6104] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6109] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (2b2859a3-eade-437c-aae4-a28445b93a76)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6110] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6114] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6116] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6125] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6126] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6131] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6138] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (d222eb3e-7853-4f73-b4e4-1cb4cf3d3a42)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6140] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6145] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6148] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6149] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6151] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6165] device (eth1): disconnecting for new activation request.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6166] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6171] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6173] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6174] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6178] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6180] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6184] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6190] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (db42ad8c-6e44-4853-87d9-6517f3dae50d)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6191] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6195] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6198] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6200] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6203] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6205] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6210] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6217] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (740684cc-db1b-4840-b9fe-c7eca00969e2)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6218] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6223] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6225] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6227] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6233] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <warn>  [1765460353.6235] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6239] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6245] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (20ddb93c-a92b-4a7f-8656-d16ec06045d6)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6246] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6251] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6254] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6256] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6259] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6274] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6277] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6281] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6283] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6291] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6296] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6302] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6307] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6310] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6316] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6321] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 kernel: ovs-system: entered promiscuous mode
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6345] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 kernel: Timeout policy base is empty
Dec 11 08:39:13 np0005555520 systemd-udevd[59137]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6351] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6363] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6375] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6382] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6387] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6396] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6406] dhcp4 (eth0): canceled DHCP transaction
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6406] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6407] dhcp4 (eth0): state changed no lease
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6410] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6433] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6441] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59134 uid=0 result="fail" reason="Device is not activated"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6461] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6504] device (eth1): disconnecting for new activation request.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6506] audit: op="connection-activate" uuid="cac4ab7f-801f-52ff-ba00-bece3e223d4a" name="ci-private-network" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6508] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6513] dhcp4 (eth0): state changed new lease, address=38.129.56.119
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6519] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6595] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59134 uid=0 result="success"
Dec 11 08:39:13 np0005555520 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6652] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 11 08:39:13 np0005555520 kernel: br-ex: entered promiscuous mode
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6747] device (eth1): Activation: starting connection 'ci-private-network' (cac4ab7f-801f-52ff-ba00-bece3e223d4a)
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6765] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6768] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6782] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6785] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6787] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6789] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6791] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6793] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6804] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6815] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6820] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6826] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6831] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6836] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6840] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6845] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6850] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6855] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6859] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6865] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6869] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 kernel: vlan22: entered promiscuous mode
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6878] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 11 08:39:13 np0005555520 systemd-udevd[59139]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6886] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6898] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6931] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 kernel: vlan20: entered promiscuous mode
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6943] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6955] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.6965] device (eth1): Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 kernel: vlan21: entered promiscuous mode
Dec 11 08:39:13 np0005555520 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7033] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 11 08:39:13 np0005555520 systemd-udevd[59231]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7045] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7050] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7057] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7087] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7137] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7139] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7147] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7154] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7180] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7196] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7213] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7227] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7230] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7237] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7249] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7252] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:39:13 np0005555520 NetworkManager[56353]: <info>  [1765460353.7260] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 08:39:14 np0005555520 NetworkManager[56353]: <info>  [1765460354.8736] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59134 uid=0 result="success"
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.0579] checkpoint[0x55880be39950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.0582] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59134 uid=0 result="success"
Dec 11 08:39:15 np0005555520 python3.9[59468]: ansible-ansible.legacy.async_status Invoked with jid=j203831945745.59128 mode=status _async_dir=/root/.ansible_async
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.3806] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59134 uid=0 result="success"
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.3826] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59134 uid=0 result="success"
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.6191] audit: op="networking-control" arg="global-dns-configuration" pid=59134 uid=0 result="success"
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.6224] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.6263] audit: op="networking-control" arg="global-dns-configuration" pid=59134 uid=0 result="success"
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.6289] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59134 uid=0 result="success"
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.7736] checkpoint[0x55880be39a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 11 08:39:15 np0005555520 NetworkManager[56353]: <info>  [1765460355.7739] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59134 uid=0 result="success"
Dec 11 08:39:15 np0005555520 ansible-async_wrapper.py[59132]: Module complete (59132)
Dec 11 08:39:16 np0005555520 ansible-async_wrapper.py[59131]: Done in kid B.
Dec 11 08:39:18 np0005555520 python3.9[59573]: ansible-ansible.legacy.async_status Invoked with jid=j203831945745.59128 mode=status _async_dir=/root/.ansible_async
Dec 11 08:39:19 np0005555520 python3.9[59673]: ansible-ansible.legacy.async_status Invoked with jid=j203831945745.59128 mode=cleanup _async_dir=/root/.ansible_async
Dec 11 08:39:20 np0005555520 python3.9[59825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:39:20 np0005555520 python3.9[59948]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460359.6978261-322-193651044391710/.source.returncode _original_basename=.l04dcl3_ follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:21 np0005555520 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:39:21 np0005555520 python3.9[60102]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:39:22 np0005555520 python3.9[60226]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460361.1098325-338-157649800239556/.source.cfg _original_basename=.xpibg0b1 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:23 np0005555520 python3.9[60378]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:39:23 np0005555520 systemd[1]: Reloading Network Manager...
Dec 11 08:39:23 np0005555520 NetworkManager[56353]: <info>  [1765460363.3062] audit: op="reload" arg="0" pid=60382 uid=0 result="success"
Dec 11 08:39:23 np0005555520 NetworkManager[56353]: <info>  [1765460363.3069] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 11 08:39:23 np0005555520 systemd[1]: Reloaded Network Manager.
Dec 11 08:39:23 np0005555520 systemd[1]: session-11.scope: Deactivated successfully.
Dec 11 08:39:23 np0005555520 systemd[1]: session-11.scope: Consumed 54.131s CPU time.
Dec 11 08:39:23 np0005555520 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Dec 11 08:39:23 np0005555520 systemd-logind[786]: Removed session 11.
Dec 11 08:39:30 np0005555520 systemd-logind[786]: New session 12 of user zuul.
Dec 11 08:39:30 np0005555520 systemd[1]: Started Session 12 of User zuul.
Dec 11 08:39:31 np0005555520 python3.9[60566]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:39:32 np0005555520 python3.9[60721]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:39:33 np0005555520 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:39:34 np0005555520 python3.9[60911]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:39:34 np0005555520 systemd[1]: session-12.scope: Deactivated successfully.
Dec 11 08:39:34 np0005555520 systemd[1]: session-12.scope: Consumed 2.603s CPU time.
Dec 11 08:39:34 np0005555520 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Dec 11 08:39:34 np0005555520 systemd-logind[786]: Removed session 12.
Dec 11 08:39:40 np0005555520 systemd-logind[786]: New session 13 of user zuul.
Dec 11 08:39:40 np0005555520 systemd[1]: Started Session 13 of User zuul.
Dec 11 08:39:41 np0005555520 python3.9[61093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:39:42 np0005555520 python3.9[61247]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:39:43 np0005555520 python3.9[61405]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:39:44 np0005555520 python3.9[61490]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:39:47 np0005555520 python3.9[61643]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:39:48 np0005555520 python3.9[61834]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:49 np0005555520 python3.9[61986]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:39:49 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:39:50 np0005555520 python3.9[62149]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:39:50 np0005555520 python3.9[62227]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:39:51 np0005555520 python3.9[62379]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:39:52 np0005555520 python3.9[62457]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:39:53 np0005555520 python3.9[62609]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:39:54 np0005555520 python3.9[62761]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:39:55 np0005555520 python3.9[62913]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:39:55 np0005555520 python3.9[63065]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:39:56 np0005555520 python3.9[63217]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:39:59 np0005555520 python3.9[63370]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:40:00 np0005555520 python3.9[63524]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:40:01 np0005555520 python3.9[63676]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:40:02 np0005555520 python3.9[63828]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:40:03 np0005555520 python3.9[63981]: ansible-service_facts Invoked
Dec 11 08:40:03 np0005555520 network[63998]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 08:40:03 np0005555520 network[63999]: 'network-scripts' will be removed from distribution in near future.
Dec 11 08:40:03 np0005555520 network[64000]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 08:40:08 np0005555520 python3.9[64452]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:40:11 np0005555520 python3.9[64605]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 11 08:40:12 np0005555520 python3.9[64757]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:13 np0005555520 python3.9[64882]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460411.977177-232-223010791323405/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:14 np0005555520 python3.9[65036]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:14 np0005555520 python3.9[65161]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460413.7420413-247-73224137180326/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:16 np0005555520 python3.9[65315]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:17 np0005555520 python3.9[65469]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:40:18 np0005555520 python3.9[65553]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:40:19 np0005555520 python3.9[65707]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:40:20 np0005555520 python3.9[65791]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:40:20 np0005555520 chronyd[790]: chronyd exiting
Dec 11 08:40:20 np0005555520 systemd[1]: Stopping NTP client/server...
Dec 11 08:40:20 np0005555520 systemd[1]: chronyd.service: Deactivated successfully.
Dec 11 08:40:20 np0005555520 systemd[1]: Stopped NTP client/server.
Dec 11 08:40:20 np0005555520 systemd[1]: Starting NTP client/server...
Dec 11 08:40:20 np0005555520 chronyd[65800]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 11 08:40:20 np0005555520 chronyd[65800]: Frequency -23.744 +/- 0.149 ppm read from /var/lib/chrony/drift
Dec 11 08:40:20 np0005555520 chronyd[65800]: Loaded seccomp filter (level 2)
Dec 11 08:40:20 np0005555520 systemd[1]: Started NTP client/server.
Dec 11 08:40:21 np0005555520 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Dec 11 08:40:21 np0005555520 systemd[1]: session-13.scope: Deactivated successfully.
Dec 11 08:40:21 np0005555520 systemd[1]: session-13.scope: Consumed 28.882s CPU time.
Dec 11 08:40:21 np0005555520 systemd-logind[786]: Removed session 13.
Dec 11 08:40:27 np0005555520 systemd-logind[786]: New session 14 of user zuul.
Dec 11 08:40:27 np0005555520 systemd[1]: Started Session 14 of User zuul.
Dec 11 08:40:28 np0005555520 python3.9[65979]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:40:30 np0005555520 python3.9[66135]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:31 np0005555520 python3.9[66312]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:32 np0005555520 python3.9[66390]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.zkrq6ojp recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:33 np0005555520 python3.9[66542]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:34 np0005555520 python3.9[66665]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460432.796103-61-159234261013919/.source _original_basename=.9pcd3_e2 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:34 np0005555520 python3.9[66817]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:40:35 np0005555520 python3.9[66969]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:36 np0005555520 python3.9[67092]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460435.0772867-85-149222161573396/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:40:36 np0005555520 python3.9[67244]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:37 np0005555520 python3.9[67367]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460436.2562282-85-249198938219130/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:40:38 np0005555520 python3.9[67519]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:38 np0005555520 python3.9[67671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:39 np0005555520 python3.9[67794]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460438.1847064-122-121848403203140/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:40 np0005555520 python3.9[67946]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:40 np0005555520 python3.9[68069]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460439.4937534-137-116919807347396/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:41 np0005555520 python3.9[68221]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:40:41 np0005555520 systemd[1]: Reloading.
Dec 11 08:40:41 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:40:41 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:40:42 np0005555520 systemd[1]: Reloading.
Dec 11 08:40:42 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:40:42 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:40:42 np0005555520 systemd[1]: Starting EDPM Container Shutdown...
Dec 11 08:40:42 np0005555520 systemd[1]: Finished EDPM Container Shutdown.
Dec 11 08:40:43 np0005555520 python3.9[68449]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:43 np0005555520 python3.9[68572]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460442.4761336-160-22507271132218/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:44 np0005555520 python3.9[68724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:45 np0005555520 python3.9[68847]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460443.9689822-175-29529082532868/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:45 np0005555520 python3.9[68999]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:40:46 np0005555520 systemd[1]: Reloading.
Dec 11 08:40:46 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:40:46 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:40:46 np0005555520 systemd[1]: Reloading.
Dec 11 08:40:46 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:40:46 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:40:46 np0005555520 systemd[1]: Starting Create netns directory...
Dec 11 08:40:46 np0005555520 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 11 08:40:46 np0005555520 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 11 08:40:46 np0005555520 systemd[1]: Finished Create netns directory.
Dec 11 08:40:47 np0005555520 python3.9[69226]: ansible-ansible.builtin.service_facts Invoked
Dec 11 08:40:47 np0005555520 network[69243]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 08:40:47 np0005555520 network[69244]: 'network-scripts' will be removed from distribution in near future.
Dec 11 08:40:47 np0005555520 network[69245]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 08:40:51 np0005555520 python3.9[69507]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:40:51 np0005555520 systemd[1]: Reloading.
Dec 11 08:40:52 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:40:52 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:40:52 np0005555520 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 11 08:40:52 np0005555520 iptables.init[69547]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 11 08:40:52 np0005555520 iptables.init[69547]: iptables: Flushing firewall rules: [  OK  ]
Dec 11 08:40:52 np0005555520 systemd[1]: iptables.service: Deactivated successfully.
Dec 11 08:40:52 np0005555520 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 11 08:40:53 np0005555520 python3.9[69743]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:40:54 np0005555520 python3.9[69897]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:40:54 np0005555520 systemd[1]: Reloading.
Dec 11 08:40:54 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:40:54 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:40:54 np0005555520 systemd[1]: Starting Netfilter Tables...
Dec 11 08:40:54 np0005555520 systemd[1]: Finished Netfilter Tables.
Dec 11 08:40:55 np0005555520 python3.9[70089]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:40:57 np0005555520 python3.9[70242]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:40:57 np0005555520 python3.9[70367]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460456.2784848-244-76006055585138/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:40:58 np0005555520 python3.9[70520]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:40:58 np0005555520 systemd[1]: Reloading OpenSSH server daemon...
Dec 11 08:40:58 np0005555520 systemd[1]: Reloaded OpenSSH server daemon.
Dec 11 08:40:59 np0005555520 python3.9[70676]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:00 np0005555520 python3.9[70828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:41:00 np0005555520 python3.9[70951]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460459.7594762-275-145708833378439/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:02 np0005555520 python3.9[71103]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 11 08:41:02 np0005555520 systemd[1]: Starting Time & Date Service...
Dec 11 08:41:02 np0005555520 systemd[1]: Started Time & Date Service.
Dec 11 08:41:02 np0005555520 python3.9[71259]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:03 np0005555520 python3.9[71411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:41:04 np0005555520 python3.9[71534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460463.059836-310-219731749416285/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:05 np0005555520 python3.9[71686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:41:05 np0005555520 python3.9[71809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460464.456442-325-40546531811933/.source.yaml _original_basename=.y5584sei follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:06 np0005555520 python3.9[71961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:41:07 np0005555520 python3.9[72084]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460465.990966-340-3714827098051/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:07 np0005555520 python3.9[72236]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:41:08 np0005555520 python3.9[72389]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:41:09 np0005555520 python3[72542]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 11 08:41:10 np0005555520 python3.9[72694]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:41:11 np0005555520 python3.9[72817]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460469.965745-379-107897077331942/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:12 np0005555520 python3.9[72969]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:41:12 np0005555520 python3.9[73092]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460471.4762282-394-129147143989196/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:13 np0005555520 python3.9[73244]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:41:14 np0005555520 python3.9[73367]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460472.8945475-409-76316396898068/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:14 np0005555520 python3.9[73519]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:41:15 np0005555520 python3.9[73642]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460474.336064-424-160908740964145/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:16 np0005555520 python3.9[73794]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:41:17 np0005555520 python3.9[73917]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460475.9031315-439-15642105289618/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:17 np0005555520 python3.9[74069]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:18 np0005555520 python3.9[74221]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:41:19 np0005555520 python3.9[74382]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:20 np0005555520 python3.9[74535]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:21 np0005555520 python3.9[74687]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:22 np0005555520 python3.9[74839]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 11 08:41:22 np0005555520 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:41:22 np0005555520 python3.9[74993]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 11 08:41:23 np0005555520 systemd[1]: session-14.scope: Deactivated successfully.
Dec 11 08:41:23 np0005555520 systemd[1]: session-14.scope: Consumed 39.791s CPU time.
Dec 11 08:41:23 np0005555520 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Dec 11 08:41:23 np0005555520 systemd-logind[786]: Removed session 14.
Dec 11 08:41:28 np0005555520 systemd-logind[786]: New session 15 of user zuul.
Dec 11 08:41:28 np0005555520 systemd[1]: Started Session 15 of User zuul.
Dec 11 08:41:29 np0005555520 python3.9[75174]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 11 08:41:30 np0005555520 python3.9[75326]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:41:31 np0005555520 python3.9[75478]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:41:32 np0005555520 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 11 08:41:32 np0005555520 python3.9[75632]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDL4ZWRNz56jVfdDfJjsWoguC0Xp0EnrWPUPwjZDN5qSQlPPdOzL9CoVn0nqe04QhobD1cMVwrnlASZEQE8KCk2aG30NSj+ppvckjrOetmiC4CS1oxyAIKnG9bFPPlLFX+tHQPsQp314H5ZwZfdV7pu/st71bAtS1g/oVwBhvtqsgnfCLkC3zJ8nFO+8tgHkKDdd6/+zypERoX4inDRMH3XezVAQAFl2L+2plNHO4DWdji6v4XwERdS9L303OuPcjt1NA28tS4OdEGLljGSVKAYfleS6GXMevhyNSeArHs+CQRIuOHZNkFyZBr0/g5uIh2a8/nFyJsWG8BCllfEoXnLU4A1LHpfrrtFfpBpdLdB2HiX9ioLTPVU6yjRipU4/3y/knX8deDquEHqtkR/XattM323OCbUPqCdj+lkfdKqyD58oJkwxqdu63+iFNtEeIq4ooTF9Ac4DvLRscyxF068ScR+hT3VcKQiU9g3DSPfKquUz5JY7apoVwti3frodLs=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH4ZXb6e74l1ErPwnFepAglVXrllGhYUMTTFEvhDwFcC#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHhiIL1aJJwfhTFEeDkids4uWyT1OwTcn/oge5AjA9a1ZHhkfs/Z8csWd9kvdRNnwfF30xLHd5a8gsLrkj1aGS8=#012 create=True mode=0644 path=/tmp/ansible.owewtikz state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:33 np0005555520 python3.9[75784]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.owewtikz' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:41:34 np0005555520 python3.9[75938]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.owewtikz state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:34 np0005555520 systemd[1]: session-15.scope: Deactivated successfully.
Dec 11 08:41:34 np0005555520 systemd[1]: session-15.scope: Consumed 3.889s CPU time.
Dec 11 08:41:34 np0005555520 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Dec 11 08:41:34 np0005555520 systemd-logind[786]: Removed session 15.
Dec 11 08:41:40 np0005555520 systemd-logind[786]: New session 16 of user zuul.
Dec 11 08:41:40 np0005555520 systemd[1]: Started Session 16 of User zuul.
Dec 11 08:41:41 np0005555520 python3.9[76116]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:41:42 np0005555520 python3.9[76272]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 11 08:41:43 np0005555520 python3.9[76426]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:41:44 np0005555520 python3.9[76579]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:41:45 np0005555520 python3.9[76732]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:41:46 np0005555520 python3.9[76886]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:41:47 np0005555520 python3.9[77041]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:41:47 np0005555520 systemd[1]: session-16.scope: Deactivated successfully.
Dec 11 08:41:47 np0005555520 systemd[1]: session-16.scope: Consumed 5.020s CPU time.
Dec 11 08:41:47 np0005555520 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Dec 11 08:41:47 np0005555520 systemd-logind[786]: Removed session 16.
Dec 11 08:41:53 np0005555520 systemd-logind[786]: New session 17 of user zuul.
Dec 11 08:41:53 np0005555520 systemd[1]: Started Session 17 of User zuul.
Dec 11 08:41:54 np0005555520 python3.9[77219]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:41:55 np0005555520 python3.9[77375]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:41:56 np0005555520 python3.9[77459]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 08:41:58 np0005555520 python3.9[77610]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:00 np0005555520 python3.9[77761]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 08:42:00 np0005555520 python3.9[77911]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:42:01 np0005555520 python3.9[78061]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:42:02 np0005555520 systemd[1]: session-17.scope: Deactivated successfully.
Dec 11 08:42:02 np0005555520 systemd[1]: session-17.scope: Consumed 6.585s CPU time.
Dec 11 08:42:02 np0005555520 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Dec 11 08:42:02 np0005555520 systemd-logind[786]: Removed session 17.
Dec 11 08:42:07 np0005555520 systemd-logind[786]: New session 18 of user zuul.
Dec 11 08:42:07 np0005555520 systemd[1]: Started Session 18 of User zuul.
Dec 11 08:42:08 np0005555520 python3.9[78241]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:42:10 np0005555520 python3.9[78397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:11 np0005555520 python3.9[78549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:12 np0005555520 python3.9[78701]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:13 np0005555520 python3.9[78824]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460531.8092318-65-78187409436028/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0cc3994c6c986696e6aee535c2f15494fcde2185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:14 np0005555520 python3.9[78976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:14 np0005555520 python3.9[79099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460533.5706456-65-232121906868781/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0719f708e5a690e324c9a0c52b5bca3fe9c29fc5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:15 np0005555520 python3.9[79251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:16 np0005555520 python3.9[79374]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460535.0621521-65-157911971100649/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=28676b1c09af5b2f663d8fa4b789b7cdb07b1568 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:17 np0005555520 python3.9[79526]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:17 np0005555520 python3.9[79678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:18 np0005555520 python3.9[79830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:19 np0005555520 python3.9[79953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460538.0475662-124-249611820032458/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=adf9a926846e13e7d12b3df07bfe0ea829dab68a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:19 np0005555520 python3.9[80105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:20 np0005555520 python3.9[80228]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460539.439535-124-21554346687124/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0719f708e5a690e324c9a0c52b5bca3fe9c29fc5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:21 np0005555520 python3.9[80380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:21 np0005555520 python3.9[80503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460540.7989824-124-54579690905753/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=99e09fe5f030ce64966f34dbed38e08ea3e89b18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:22 np0005555520 python3.9[80655]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:23 np0005555520 python3.9[80807]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:24 np0005555520 python3.9[80959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:24 np0005555520 python3.9[81082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460543.6171052-183-270759371736297/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=aa49266790138db25f3dc7dfa4ccabdf8e77b46b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:25 np0005555520 python3.9[81234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:25 np0005555520 python3.9[81357]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460544.9182832-183-9287885272588/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c5e0ac3d0de2a7fe1a4f19387e65b142a4683388 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:26 np0005555520 python3.9[81509]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:27 np0005555520 python3.9[81632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460546.1599705-183-10833432369398/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=bceeeaf601401b4d5de695845e6782f89f1f70c5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:28 np0005555520 python3.9[81784]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:28 np0005555520 python3.9[81936]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:29 np0005555520 python3.9[82088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:30 np0005555520 chronyd[65800]: Selected source 216.6.2.70 (pool.ntp.org)
Dec 11 08:42:30 np0005555520 python3.9[82211]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460549.1209989-242-1630848147064/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c4313a938e5b6940ce76df42aafee3ef24ef9fe6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:30 np0005555520 python3.9[82363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:31 np0005555520 python3.9[82486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460550.4193358-242-172363185635834/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=46a92a95ca6a8e875ccee11e623a96d03529da7d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:31 np0005555520 python3.9[82638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:32 np0005555520 python3.9[82761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460551.566252-242-105771234051006/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=62fb5f4fbdcdab058e0bd6bdf4b36b648fde9bcf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:33 np0005555520 python3.9[82913]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:34 np0005555520 python3.9[83065]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:34 np0005555520 python3.9[83217]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:35 np0005555520 python3.9[83340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460554.3874276-301-101441654020010/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0a8bbba942040b64676e0124f2c0b525e5a41da2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:36 np0005555520 python3.9[83492]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:36 np0005555520 python3.9[83615]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460555.6623693-301-15631429215101/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c5e0ac3d0de2a7fe1a4f19387e65b142a4683388 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:37 np0005555520 python3.9[83767]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:38 np0005555520 python3.9[83890]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460556.9920545-301-263052020033902/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0e2ea776dfeaa351f919eacb48e59f9b92ac9e9b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:39 np0005555520 python3.9[84042]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:40 np0005555520 python3.9[84194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:40 np0005555520 python3.9[84317]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460559.595249-369-241146515487502/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3e1048d83842a22be6299411de826f2ede976d1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:41 np0005555520 python3.9[84469]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:42 np0005555520 python3.9[84622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:42 np0005555520 python3.9[84745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460561.7926846-393-52832939997991/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3e1048d83842a22be6299411de826f2ede976d1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:43 np0005555520 python3.9[84897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:44 np0005555520 python3.9[85049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:45 np0005555520 python3.9[85172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460564.1011536-417-225388036199026/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3e1048d83842a22be6299411de826f2ede976d1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:46 np0005555520 python3.9[85324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:46 np0005555520 python3.9[85476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:47 np0005555520 python3.9[85599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460566.2507808-441-201681171381075/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3e1048d83842a22be6299411de826f2ede976d1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:48 np0005555520 python3.9[85751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:48 np0005555520 python3.9[85903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:49 np0005555520 python3.9[86026]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460568.25292-465-77002977236327/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3e1048d83842a22be6299411de826f2ede976d1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:50 np0005555520 python3.9[86178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:51 np0005555520 python3.9[86330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:51 np0005555520 python3.9[86453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460570.5108707-489-263460876969763/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3e1048d83842a22be6299411de826f2ede976d1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:52 np0005555520 python3.9[86605]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:53 np0005555520 python3.9[86758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:54 np0005555520 python3.9[86883]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460573.0619686-513-207497658021730/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3e1048d83842a22be6299411de826f2ede976d1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:54 np0005555520 python3.9[87035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:42:55 np0005555520 python3.9[87187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:42:56 np0005555520 python3.9[87310]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460575.0357249-537-263452903799575/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3e1048d83842a22be6299411de826f2ede976d1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:56 np0005555520 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Dec 11 08:42:56 np0005555520 systemd[1]: session-18.scope: Deactivated successfully.
Dec 11 08:42:56 np0005555520 systemd[1]: session-18.scope: Consumed 37.738s CPU time.
Dec 11 08:42:56 np0005555520 systemd-logind[786]: Removed session 18.
Dec 11 08:43:02 np0005555520 systemd-logind[786]: New session 19 of user zuul.
Dec 11 08:43:02 np0005555520 systemd[1]: Started Session 19 of User zuul.
Dec 11 08:43:03 np0005555520 python3.9[87489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:43:05 np0005555520 python3.9[87645]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:43:05 np0005555520 python3.9[87797]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:43:06 np0005555520 python3.9[87947]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:43:07 np0005555520 python3.9[88099]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 11 08:43:09 np0005555520 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 11 08:43:09 np0005555520 python3.9[88255]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:43:10 np0005555520 python3.9[88339]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:43:12 np0005555520 python3.9[88492]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 08:43:13 np0005555520 python3[88647]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 11 08:43:14 np0005555520 python3.9[88799]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:15 np0005555520 python3.9[88951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:15 np0005555520 python3.9[89029]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:16 np0005555520 python3.9[89181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:17 np0005555520 python3.9[89259]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.46qw2l4r recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:17 np0005555520 python3.9[89411]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:18 np0005555520 python3.9[89489]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:19 np0005555520 python3.9[89641]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:43:20 np0005555520 python3[89794]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 11 08:43:20 np0005555520 python3.9[89946]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:21 np0005555520 python3.9[90071]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460600.3020298-157-89559476742324/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:22 np0005555520 python3.9[90223]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:23 np0005555520 python3.9[90348]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460601.8911998-172-233620588986918/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:23 np0005555520 python3.9[90500]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:24 np0005555520 python3.9[90625]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460603.2377505-187-113530644036509/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:25 np0005555520 python3.9[90777]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:26 np0005555520 python3.9[90902]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460604.82468-202-208172395477381/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:26 np0005555520 python3.9[91054]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:27 np0005555520 python3.9[91179]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460606.266339-217-263302462792512/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:28 np0005555520 python3.9[91331]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:29 np0005555520 python3.9[91483]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:43:29 np0005555520 python3.9[91638]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:30 np0005555520 python3.9[91790]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:43:31 np0005555520 python3.9[91943]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:43:32 np0005555520 python3.9[92097]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:43:33 np0005555520 python3.9[92252]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:34 np0005555520 python3.9[92402]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:43:35 np0005555520 python3.9[92555]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:cb:58:d7:dd" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:43:35 np0005555520 ovs-vsctl[92556]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:cb:58:d7:dd external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 11 08:43:36 np0005555520 python3.9[92708]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:43:37 np0005555520 python3.9[92863]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:43:37 np0005555520 ovs-vsctl[92864]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 11 08:43:37 np0005555520 python3.9[93014]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:43:38 np0005555520 python3.9[93168]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:43:39 np0005555520 python3.9[93320]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:39 np0005555520 python3.9[93398]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:43:40 np0005555520 python3.9[93552]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:40 np0005555520 python3.9[93630]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:43:41 np0005555520 python3.9[93782]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:42 np0005555520 python3.9[93934]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:42 np0005555520 python3.9[94012]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:43 np0005555520 python3.9[94164]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:44 np0005555520 python3.9[94242]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:44 np0005555520 python3.9[94394]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:43:44 np0005555520 systemd[1]: Reloading.
Dec 11 08:43:44 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:43:44 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:43:45 np0005555520 python3.9[94583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:46 np0005555520 python3.9[94661]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:47 np0005555520 python3.9[94813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:47 np0005555520 python3.9[94891]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:48 np0005555520 python3.9[95043]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:43:48 np0005555520 systemd[1]: Reloading.
Dec 11 08:43:48 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:43:48 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:43:48 np0005555520 systemd[1]: Starting Create netns directory...
Dec 11 08:43:48 np0005555520 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 11 08:43:48 np0005555520 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 11 08:43:48 np0005555520 systemd[1]: Finished Create netns directory.
Dec 11 08:43:49 np0005555520 python3.9[95236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:43:50 np0005555520 python3.9[95388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:50 np0005555520 python3.9[95511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460629.7528274-468-165058230994509/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:43:51 np0005555520 python3.9[95663]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:43:52 np0005555520 python3.9[95815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:43:53 np0005555520 python3.9[95938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460631.9019024-493-80345606633690/.source.json _original_basename=.451v58im follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
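The `ovn_controller.json` copied here is a kolla config file; its contents are not logged, but the command it carries is echoed later by `kolla_set_configs` ("Writing out command to execute" followed by `cat /run_command`). A rough sketch of its likely shape, assuming the standard kolla layout (the `config_files` key is a hypothetical illustration, not read from the actual file):

```python
import json

# Sketch of a kolla config.json for ovn_controller. The "command" value is
# taken from the kolla_set_configs output later in this log; the rest of
# the structure is an assumed illustration of the kolla format.
config = {
    "command": (
        "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock "
        "-p /etc/pki/tls/private/ovndb.key "
        "-c /etc/pki/tls/certs/ovndb.crt "
        "-C /etc/pki/tls/certs/ovndbca.crt"
    ),
    # Hypothetical: kolla config files typically also carry a
    # "config_files" list of {source, dest, owner, perm} entries.
}

text = json.dumps(config, indent=2)
print(text)
```

With `KOLLA_CONFIG_STRATEGY=COPY_ALWAYS` (set in the container environment below), kolla re-applies this file on every container start before exec'ing the command.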
Dec 11 08:43:53 np0005555520 python3.9[96090]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:43:56 np0005555520 python3.9[96517]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 11 08:43:57 np0005555520 python3.9[96669]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:43:58 np0005555520 python3.9[96821]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 11 08:43:58 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:43:59 np0005555520 python3[96984]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:43:59 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:43:59 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:43:59 np0005555520 podman[97019]: 2025-12-11 13:43:59.618172235 +0000 UTC m=+0.082067369 container create 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:43:59 np0005555520 podman[97019]: 2025-12-11 13:43:59.579721767 +0000 UTC m=+0.043616961 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 11 08:43:59 np0005555520 python3[96984]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
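The PODMAN-CONTAINER-DEBUG line shows `edpm_container_manage` translating the `config_data` dict into `podman create` flags (environment → `--env`, net → `--network`, each volume → `--volume`, and the image last). A simplified sketch of that mapping, using a hypothetical helper rather than the module's actual code:

```python
# Sketch: how a config_data dict like the one in the log maps onto
# `podman create` arguments. Simplified illustration; not the real
# edpm_container_manage implementation.
def podman_create_args(name: str, conf: dict) -> list:
    args = ["podman", "create", "--name", name]
    for key, value in conf.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    if conf.get("net"):
        args += ["--network", conf["net"]]
    if conf.get("privileged"):
        args.append("--privileged=True")
    if conf.get("user"):
        args += ["--user", conf["user"]]
    for vol in conf.get("volumes", []):
        args += ["--volume", vol]
    args.append(conf["image"])  # image name comes last
    return args

conf = {
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "image": "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
    "net": "host",
    "privileged": True,
    "user": "root",
    "volumes": ["/lib/modules:/lib/modules:ro", "/run:/run"],
}
print(" ".join(podman_create_args("ovn_controller", conf)))
```

Note that in the real invocation the full `config_data` dict is also attached as a `--label`, which is why the same JSON-ish blob reappears in every later podman event for this container.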
Dec 11 08:44:00 np0005555520 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 08:44:00 np0005555520 python3.9[97208]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:44:01 np0005555520 python3.9[97362]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:01 np0005555520 python3.9[97438]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:44:02 np0005555520 python3.9[97589]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765460641.88966-581-252860461941719/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:03 np0005555520 python3.9[97665]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:44:03 np0005555520 systemd[1]: Reloading.
Dec 11 08:44:03 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:44:03 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:44:03 np0005555520 python3.9[97775]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:44:04 np0005555520 systemd[1]: Reloading.
Dec 11 08:44:04 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:44:04 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:44:04 np0005555520 systemd[1]: Starting ovn_controller container...
Dec 11 08:44:04 np0005555520 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 11 08:44:04 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:44:04 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed9a5dcb1b425fa4109e85cef3c94c4cbcb1590ad07c9f132267ae92d350d26/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 11 08:44:04 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e.
Dec 11 08:44:04 np0005555520 podman[97816]: 2025-12-11 13:44:04.421054933 +0000 UTC m=+0.151842427 container init 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: + sudo -E kolla_set_configs
Dec 11 08:44:04 np0005555520 podman[97816]: 2025-12-11 13:44:04.45764044 +0000 UTC m=+0.188427904 container start 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 08:44:04 np0005555520 edpm-start-podman-container[97816]: ovn_controller
Dec 11 08:44:04 np0005555520 systemd[1]: Created slice User Slice of UID 0.
Dec 11 08:44:04 np0005555520 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 11 08:44:04 np0005555520 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 11 08:44:04 np0005555520 systemd[1]: Starting User Manager for UID 0...
Dec 11 08:44:04 np0005555520 edpm-start-podman-container[97815]: Creating additional drop-in dependency for "ovn_controller" (8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e)
Dec 11 08:44:04 np0005555520 systemd[1]: Reloading.
Dec 11 08:44:04 np0005555520 podman[97839]: 2025-12-11 13:44:04.595528214 +0000 UTC m=+0.123716186 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec 11 08:44:04 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:44:04 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:44:04 np0005555520 systemd[97869]: Queued start job for default target Main User Target.
Dec 11 08:44:04 np0005555520 systemd[97869]: Created slice User Application Slice.
Dec 11 08:44:04 np0005555520 systemd[97869]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 11 08:44:04 np0005555520 systemd[97869]: Started Daily Cleanup of User's Temporary Directories.
Dec 11 08:44:04 np0005555520 systemd[97869]: Reached target Paths.
Dec 11 08:44:04 np0005555520 systemd[97869]: Reached target Timers.
Dec 11 08:44:04 np0005555520 systemd[97869]: Starting D-Bus User Message Bus Socket...
Dec 11 08:44:04 np0005555520 systemd[97869]: Starting Create User's Volatile Files and Directories...
Dec 11 08:44:04 np0005555520 systemd[97869]: Finished Create User's Volatile Files and Directories.
Dec 11 08:44:04 np0005555520 systemd[97869]: Listening on D-Bus User Message Bus Socket.
Dec 11 08:44:04 np0005555520 systemd[97869]: Reached target Sockets.
Dec 11 08:44:04 np0005555520 systemd[97869]: Reached target Basic System.
Dec 11 08:44:04 np0005555520 systemd[97869]: Reached target Main User Target.
Dec 11 08:44:04 np0005555520 systemd[97869]: Startup finished in 191ms.
Dec 11 08:44:04 np0005555520 systemd[1]: Started User Manager for UID 0.
Dec 11 08:44:04 np0005555520 systemd[1]: Started ovn_controller container.
Dec 11 08:44:04 np0005555520 systemd[1]: 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e-be2daa866ad311a.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:44:04 np0005555520 systemd[1]: 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e-be2daa866ad311a.service: Failed with result 'exit-code'.
Dec 11 08:44:04 np0005555520 systemd[1]: Started Session c1 of User root.
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: INFO:__main__:Validating config file
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: INFO:__main__:Writing out command to execute
Dec 11 08:44:04 np0005555520 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: ++ cat /run_command
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: + ARGS=
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: + sudo kolla_copy_cacerts
Dec 11 08:44:04 np0005555520 systemd[1]: Started Session c2 of User root.
Dec 11 08:44:04 np0005555520 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: + [[ ! -n '' ]]
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: + . kolla_extend_start
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: + umask 0022
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:04Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:04Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:04Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 11 08:44:04 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:04Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 11 08:44:05 np0005555520 NetworkManager[56353]: <info>  [1765460645.0025] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 11 08:44:05 np0005555520 NetworkManager[56353]: <info>  [1765460645.0034] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:44:05 np0005555520 NetworkManager[56353]: <warn>  [1765460645.0037] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 08:44:05 np0005555520 NetworkManager[56353]: <info>  [1765460645.0044] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Dec 11 08:44:05 np0005555520 NetworkManager[56353]: <info>  [1765460645.0050] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Dec 11 08:44:05 np0005555520 NetworkManager[56353]: <info>  [1765460645.0054] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 11 08:44:05 np0005555520 kernel: br-int: entered promiscuous mode
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 11 08:44:05 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:05Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 11 08:44:05 np0005555520 NetworkManager[56353]: <info>  [1765460645.0311] manager: (ovn-a09662-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 11 08:44:05 np0005555520 systemd-udevd[97991]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:44:05 np0005555520 kernel: genev_sys_6081: entered promiscuous mode
Dec 11 08:44:05 np0005555520 systemd-udevd[97994]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:44:05 np0005555520 NetworkManager[56353]: <info>  [1765460645.0489] device (genev_sys_6081): carrier: link connected
Dec 11 08:44:05 np0005555520 NetworkManager[56353]: <info>  [1765460645.0492] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Dec 11 08:44:05 np0005555520 python3.9[98100]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:44:05 np0005555520 ovs-vsctl[98101]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 11 08:44:06 np0005555520 python3.9[98253]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:44:06 np0005555520 ovs-vsctl[98255]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 11 08:44:07 np0005555520 python3.9[98408]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:44:07 np0005555520 ovs-vsctl[98409]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 11 08:44:07 np0005555520 systemd[1]: session-19.scope: Deactivated successfully.
Dec 11 08:44:07 np0005555520 systemd[1]: session-19.scope: Consumed 48.241s CPU time.
Dec 11 08:44:07 np0005555520 systemd-logind[786]: Session 19 logged out. Waiting for processes to exit.
Dec 11 08:44:07 np0005555520 systemd-logind[786]: Removed session 19.
Dec 11 08:44:13 np0005555520 systemd-logind[786]: New session 21 of user zuul.
Dec 11 08:44:13 np0005555520 systemd[1]: Started Session 21 of User zuul.
Dec 11 08:44:14 np0005555520 python3.9[98587]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:44:15 np0005555520 systemd[1]: Stopping User Manager for UID 0...
Dec 11 08:44:15 np0005555520 systemd[97869]: Activating special unit Exit the Session...
Dec 11 08:44:15 np0005555520 systemd[97869]: Stopped target Main User Target.
Dec 11 08:44:15 np0005555520 systemd[97869]: Stopped target Basic System.
Dec 11 08:44:15 np0005555520 systemd[97869]: Stopped target Paths.
Dec 11 08:44:15 np0005555520 systemd[97869]: Stopped target Sockets.
Dec 11 08:44:15 np0005555520 systemd[97869]: Stopped target Timers.
Dec 11 08:44:15 np0005555520 systemd[97869]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 11 08:44:15 np0005555520 systemd[97869]: Closed D-Bus User Message Bus Socket.
Dec 11 08:44:15 np0005555520 systemd[97869]: Stopped Create User's Volatile Files and Directories.
Dec 11 08:44:15 np0005555520 systemd[97869]: Removed slice User Application Slice.
Dec 11 08:44:15 np0005555520 systemd[97869]: Reached target Shutdown.
Dec 11 08:44:15 np0005555520 systemd[97869]: Finished Exit the Session.
Dec 11 08:44:15 np0005555520 systemd[97869]: Reached target Exit the Session.
Dec 11 08:44:15 np0005555520 systemd[1]: user@0.service: Deactivated successfully.
Dec 11 08:44:15 np0005555520 systemd[1]: Stopped User Manager for UID 0.
Dec 11 08:44:15 np0005555520 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 11 08:44:15 np0005555520 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 11 08:44:15 np0005555520 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 11 08:44:15 np0005555520 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 11 08:44:15 np0005555520 systemd[1]: Removed slice User Slice of UID 0.
Dec 11 08:44:15 np0005555520 python3.9[98746]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:16 np0005555520 python3.9[98898]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:16 np0005555520 python3.9[99050]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:17 np0005555520 python3.9[99202]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:18 np0005555520 python3.9[99354]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:18 np0005555520 python3.9[99504]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:44:19 np0005555520 python3.9[99656]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 11 08:44:21 np0005555520 python3.9[99806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:21 np0005555520 python3.9[99927]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460660.6035619-86-177364306802969/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:22 np0005555520 python3.9[100077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:23 np0005555520 python3.9[100198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460662.140844-101-122134469947292/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:24 np0005555520 python3.9[100351]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:44:25 np0005555520 python3.9[100435]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:44:27 np0005555520 python3.9[100590]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 08:44:28 np0005555520 python3.9[100743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:29 np0005555520 python3.9[100864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460667.941849-138-92302927690095/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:29 np0005555520 python3.9[101014]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:30 np0005555520 python3.9[101135]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460669.2783928-138-160178435181499/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:31 np0005555520 python3.9[101285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:32 np0005555520 python3.9[101406]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460671.1402745-182-165140176122750/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:32 np0005555520 python3.9[101556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:33 np0005555520 python3.9[101677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460672.5219278-182-110707053042641/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:34 np0005555520 python3.9[101827]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:44:34 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:34Z|00025|memory|INFO|16000 kB peak resident set size after 30.0 seconds
Dec 11 08:44:34 np0005555520 ovn_controller[97832]: 2025-12-11T13:44:34Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 11 08:44:34 np0005555520 podman[101981]: 2025-12-11 13:44:34.993597369 +0000 UTC m=+0.119871352 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 08:44:35 np0005555520 python3.9[101982]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:35 np0005555520 python3.9[102158]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:36 np0005555520 python3.9[102236]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:36 np0005555520 python3.9[102388]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:37 np0005555520 python3.9[102466]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:37 np0005555520 python3.9[102618]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:38 np0005555520 python3.9[102770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:39 np0005555520 python3.9[102848]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:39 np0005555520 python3.9[103000]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:40 np0005555520 python3.9[103078]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:41 np0005555520 python3.9[103230]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:44:41 np0005555520 systemd[1]: Reloading.
Dec 11 08:44:41 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:44:41 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:44:42 np0005555520 python3.9[103418]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:42 np0005555520 python3.9[103496]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:43 np0005555520 python3.9[103648]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:43 np0005555520 python3.9[103726]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:44 np0005555520 python3.9[103878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:44:44 np0005555520 systemd[1]: Reloading.
Dec 11 08:44:44 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:44:44 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:44:45 np0005555520 systemd[1]: Starting Create netns directory...
Dec 11 08:44:45 np0005555520 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 11 08:44:45 np0005555520 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 11 08:44:45 np0005555520 systemd[1]: Finished Create netns directory.
Dec 11 08:44:46 np0005555520 python3.9[104070]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:46 np0005555520 python3.9[104222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:47 np0005555520 python3.9[104345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765460686.2234-333-57361986383762/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:48 np0005555520 python3.9[104497]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:44:49 np0005555520 python3.9[104649]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:44:49 np0005555520 python3.9[104772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460688.466744-358-273627254587349/.source.json _original_basename=._xr9yrnt follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:50 np0005555520 python3.9[104924]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:52 np0005555520 python3.9[105351]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 11 08:44:54 np0005555520 python3.9[105503]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:44:54 np0005555520 python3.9[105655]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 11 08:44:56 np0005555520 python3[105833]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:44:56 np0005555520 podman[105868]: 2025-12-11 13:44:56.613471753 +0000 UTC m=+0.083907804 container create 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 11 08:44:56 np0005555520 podman[105868]: 2025-12-11 13:44:56.558921666 +0000 UTC m=+0.029357747 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 11 08:44:56 np0005555520 python3[105833]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 11 08:44:57 np0005555520 python3.9[106056]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:44:58 np0005555520 python3.9[106210]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:59 np0005555520 python3.9[106286]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:44:59 np0005555520 python3.9[106437]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765460699.1109855-446-196556418196974/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:00 np0005555520 python3.9[106513]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:45:00 np0005555520 systemd[1]: Reloading.
Dec 11 08:45:00 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:45:00 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:45:01 np0005555520 python3.9[106624]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:45:01 np0005555520 systemd[1]: Reloading.
Dec 11 08:45:01 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:45:01 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:45:01 np0005555520 systemd[1]: Starting ovn_metadata_agent container...
Dec 11 08:45:01 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:45:01 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58a1c1ea6b3637a1f40bba8dd8e6dc41007169d365de2fdcc6d021f0b874a60a/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 11 08:45:01 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58a1c1ea6b3637a1f40bba8dd8e6dc41007169d365de2fdcc6d021f0b874a60a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 11 08:45:01 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca.
Dec 11 08:45:01 np0005555520 podman[106665]: 2025-12-11 13:45:01.892746798 +0000 UTC m=+0.166604453 container init 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: + sudo -E kolla_set_configs
Dec 11 08:45:01 np0005555520 podman[106665]: 2025-12-11 13:45:01.931940092 +0000 UTC m=+0.205797727 container start 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 11 08:45:01 np0005555520 edpm-start-podman-container[106665]: ovn_metadata_agent
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Validating config file
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Copying service configuration files
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Writing out command to execute
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: ++ cat /run_command
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: + CMD=neutron-ovn-metadata-agent
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: + ARGS=
Dec 11 08:45:01 np0005555520 ovn_metadata_agent[106681]: + sudo kolla_copy_cacerts
Dec 11 08:45:02 np0005555520 edpm-start-podman-container[106664]: Creating additional drop-in dependency for "ovn_metadata_agent" (11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca)
Dec 11 08:45:02 np0005555520 ovn_metadata_agent[106681]: + [[ ! -n '' ]]
Dec 11 08:45:02 np0005555520 ovn_metadata_agent[106681]: + . kolla_extend_start
Dec 11 08:45:02 np0005555520 ovn_metadata_agent[106681]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 11 08:45:02 np0005555520 ovn_metadata_agent[106681]: Running command: 'neutron-ovn-metadata-agent'
Dec 11 08:45:02 np0005555520 ovn_metadata_agent[106681]: + umask 0022
Dec 11 08:45:02 np0005555520 ovn_metadata_agent[106681]: + exec neutron-ovn-metadata-agent
Dec 11 08:45:02 np0005555520 podman[106688]: 2025-12-11 13:45:02.028931923 +0000 UTC m=+0.083404359 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 11 08:45:02 np0005555520 systemd[1]: Reloading.
Dec 11 08:45:02 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:45:02 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:45:02 np0005555520 systemd[1]: Started ovn_metadata_agent container.
Dec 11 08:45:02 np0005555520 systemd[1]: session-21.scope: Deactivated successfully.
Dec 11 08:45:02 np0005555520 systemd[1]: session-21.scope: Consumed 36.690s CPU time.
Dec 11 08:45:02 np0005555520 systemd-logind[786]: Session 21 logged out. Waiting for processes to exit.
Dec 11 08:45:02 np0005555520 systemd-logind[786]: Removed session 21.
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.011 106686 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.011 106686 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.011 106686 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.012 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.012 106686 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.012 106686 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.012 106686 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.013 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.013 106686 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.013 106686 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.013 106686 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.013 106686 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.013 106686 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.013 106686 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.013 106686 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.014 106686 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.014 106686 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.014 106686 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.014 106686 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.014 106686 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.014 106686 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.014 106686 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.014 106686 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.015 106686 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.015 106686 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.015 106686 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.015 106686 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.015 106686 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.015 106686 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.015 106686 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.015 106686 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.015 106686 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.016 106686 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.016 106686 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.016 106686 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.016 106686 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.016 106686 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.016 106686 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.016 106686 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.016 106686 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.017 106686 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.017 106686 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.017 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.017 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.017 106686 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.017 106686 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.017 106686 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.017 106686 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.017 106686 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.018 106686 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.018 106686 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.018 106686 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.018 106686 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.018 106686 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.018 106686 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.018 106686 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.018 106686 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.019 106686 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.019 106686 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.019 106686 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.019 106686 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.019 106686 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.019 106686 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.019 106686 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.019 106686 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.020 106686 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.020 106686 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.020 106686 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.020 106686 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.020 106686 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.020 106686 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.020 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.021 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.021 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.021 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.021 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.021 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.021 106686 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.021 106686 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.022 106686 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.022 106686 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.022 106686 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.022 106686 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.022 106686 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.022 106686 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.022 106686 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.023 106686 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.023 106686 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.023 106686 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.023 106686 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.023 106686 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.023 106686 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.023 106686 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.024 106686 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.024 106686 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.024 106686 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.024 106686 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.024 106686 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.024 106686 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.024 106686 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.024 106686 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.025 106686 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.025 106686 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.025 106686 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.025 106686 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.025 106686 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.025 106686 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.025 106686 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.026 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.026 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.026 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.026 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.026 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.026 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.026 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.026 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.027 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.027 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.027 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.027 106686 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.027 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.027 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.027 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.027 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.028 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.028 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.028 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.028 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.028 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.028 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.029 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.029 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.029 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.029 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.029 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.029 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.029 106686 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.029 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.030 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.030 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.030 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.030 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.030 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.030 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.030 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.030 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.031 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.031 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.031 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.031 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.031 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.031 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.031 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.031 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.031 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.032 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.032 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.032 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.032 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.032 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.032 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.032 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.032 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.032 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.033 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.033 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.033 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.033 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.033 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.033 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.033 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.034 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.034 106686 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.034 106686 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.034 106686 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.034 106686 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.034 106686 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.034 106686 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.034 106686 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.034 106686 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.035 106686 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.035 106686 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.035 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.035 106686 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.035 106686 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.035 106686 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.035 106686 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.035 106686 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.036 106686 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.036 106686 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.036 106686 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.036 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.036 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.036 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.036 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.036 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.036 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.037 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.037 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.037 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.037 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.037 106686 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.037 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.037 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.037 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.038 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.038 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.038 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.038 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.038 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.038 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.038 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.038 106686 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.038 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.039 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.039 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.039 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.039 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.039 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.039 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.039 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.039 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.039 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.040 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.040 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.040 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.040 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.040 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.040 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.040 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.040 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.040 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.041 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.041 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.041 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.041 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.041 106686 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.041 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.041 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.041 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.041 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.042 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.042 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.042 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.042 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.042 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.042 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.042 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.042 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.043 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.043 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.043 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.043 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.043 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.043 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.043 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.043 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.043 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.044 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.044 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.044 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.044 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.044 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.044 106686 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.044 106686 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.045 106686 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.045 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.045 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.045 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.045 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.045 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.045 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.045 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.045 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.046 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.046 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.046 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.046 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.046 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.046 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.046 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.046 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.047 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.047 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.047 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.047 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.047 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.047 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.047 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.047 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.048 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.048 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.048 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.048 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.048 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.048 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.048 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.048 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.048 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.049 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.049 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.049 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.049 106686 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.049 106686 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.059 106686 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.060 106686 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.060 106686 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.060 106686 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.061 106686 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.075 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 91d1351c-e9c8-4a9c-80fe-965b575ecbf6 (UUID: 91d1351c-e9c8-4a9c-80fe-965b575ecbf6) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.101 106686 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.101 106686 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.101 106686 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.101 106686 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.105 106686 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.111 106686 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.117 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '91d1351c-e9c8-4a9c-80fe-965b575ecbf6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], external_ids={}, name=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, nb_cfg_timestamp=1765460653033, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.118 106686 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f5fb511f130>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.119 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.119 106686 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.120 106686 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.120 106686 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.125 106686 DEBUG oslo_service.service [-] Started child 106794 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.128 106794 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-165691'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.128 106686 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp2o1koz4h/privsep.sock']#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.150 106794 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.151 106794 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.151 106794 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.154 106794 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.160 106794 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.166 106794 INFO eventlet.wsgi.server [-] (106794) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec 11 08:45:04 np0005555520 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.855 106686 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.856 106686 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2o1koz4h/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.688 106799 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.693 106799 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.695 106799 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.695 106799 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106799#033[00m
Dec 11 08:45:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:04.858 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[6809b9ac-6c7d-443b-a7a0-55c990a3bb87]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 08:45:05 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:05.374 106799 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:45:05 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:05.374 106799 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:45:05 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:05.375 106799 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:45:05 np0005555520 podman[106804]: 2025-12-11 13:45:05.529943103 +0000 UTC m=+0.127427119 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 11 08:45:05 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:05.978 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[4dbd693c-fb47-42ec-b142-28ca312a9ff9]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 08:45:05 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:05.982 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, column=external_ids, values=({'neutron:ovn-metadata-id': '7c39ac7f-1581-5040-b6b6-07f6347ebbe6'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 08:45:05 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:05.998 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.005 106686 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.005 106686 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.005 106686 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.005 106686 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.005 106686 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.005 106686 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.006 106686 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.006 106686 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.006 106686 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.006 106686 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.006 106686 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.006 106686 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.007 106686 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.007 106686 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.007 106686 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.007 106686 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.007 106686 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.007 106686 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.007 106686 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.008 106686 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.008 106686 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.008 106686 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.008 106686 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.008 106686 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.008 106686 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.009 106686 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.009 106686 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.009 106686 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.009 106686 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.009 106686 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.009 106686 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.010 106686 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.010 106686 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.010 106686 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.010 106686 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.010 106686 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.010 106686 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.011 106686 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.011 106686 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.011 106686 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.011 106686 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.011 106686 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.011 106686 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.012 106686 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.012 106686 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.012 106686 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.012 106686 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.012 106686 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.012 106686 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.012 106686 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.013 106686 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.013 106686 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.013 106686 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.013 106686 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.013 106686 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.013 106686 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.013 106686 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.013 106686 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.014 106686 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.014 106686 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.014 106686 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.014 106686 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.014 106686 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.014 106686 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.014 106686 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.015 106686 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.015 106686 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.015 106686 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.015 106686 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.015 106686 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.015 106686 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.015 106686 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.016 106686 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.016 106686 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.016 106686 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.016 106686 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.016 106686 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.016 106686 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.016 106686 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.016 106686 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.017 106686 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.017 106686 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.017 106686 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.017 106686 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.017 106686 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.017 106686 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.017 106686 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.018 106686 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.018 106686 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.018 106686 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.018 106686 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.018 106686 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.018 106686 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.018 106686 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.019 106686 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.019 106686 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.019 106686 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.019 106686 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.019 106686 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.019 106686 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.019 106686 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.019 106686 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.020 106686 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.020 106686 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.020 106686 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.020 106686 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.020 106686 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.020 106686 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.020 106686 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.021 106686 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.021 106686 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.021 106686 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.021 106686 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.021 106686 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.021 106686 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.022 106686 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.022 106686 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.022 106686 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.022 106686 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.022 106686 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.022 106686 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.022 106686 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.023 106686 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.023 106686 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.023 106686 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.023 106686 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.023 106686 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.023 106686 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.024 106686 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.024 106686 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.024 106686 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.024 106686 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.024 106686 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.024 106686 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.024 106686 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.025 106686 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.025 106686 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.025 106686 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.025 106686 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.025 106686 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.025 106686 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.025 106686 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.026 106686 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.026 106686 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.026 106686 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.026 106686 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.026 106686 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.026 106686 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.026 106686 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.026 106686 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.027 106686 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.027 106686 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.027 106686 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.027 106686 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.027 106686 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.027 106686 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.027 106686 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.027 106686 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.028 106686 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.028 106686 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.028 106686 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.028 106686 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.028 106686 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.028 106686 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.028 106686 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.028 106686 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.029 106686 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.029 106686 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.029 106686 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.029 106686 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.029 106686 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.029 106686 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.029 106686 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.030 106686 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.030 106686 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.030 106686 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.030 106686 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.030 106686 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.030 106686 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.030 106686 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.031 106686 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.031 106686 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.031 106686 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.031 106686 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.031 106686 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.031 106686 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.031 106686 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.032 106686 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.032 106686 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.032 106686 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.032 106686 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.032 106686 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.032 106686 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.032 106686 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.033 106686 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.033 106686 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.033 106686 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.033 106686 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.033 106686 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.033 106686 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.033 106686 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.034 106686 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.034 106686 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.034 106686 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.034 106686 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.034 106686 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.034 106686 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.034 106686 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.035 106686 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.035 106686 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.035 106686 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.035 106686 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.035 106686 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.035 106686 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.035 106686 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.035 106686 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.036 106686 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.036 106686 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.036 106686 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.036 106686 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.036 106686 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.036 106686 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.036 106686 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.036 106686 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.037 106686 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.037 106686 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.037 106686 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.037 106686 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.037 106686 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.037 106686 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.037 106686 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.038 106686 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.038 106686 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.038 106686 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.038 106686 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.038 106686 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.038 106686 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.038 106686 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.038 106686 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.039 106686 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.039 106686 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.039 106686 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.039 106686 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.039 106686 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.039 106686 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.040 106686 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.040 106686 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.040 106686 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.040 106686 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.040 106686 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.040 106686 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.040 106686 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.040 106686 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.041 106686 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.041 106686 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.041 106686 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.041 106686 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.041 106686 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.041 106686 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.041 106686 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.042 106686 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.042 106686 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.042 106686 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.042 106686 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.042 106686 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.042 106686 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.042 106686 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.043 106686 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.043 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.043 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.043 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.043 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.043 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.043 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.044 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.044 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.044 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.044 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.044 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.044 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.044 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.044 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.045 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.045 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.045 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.045 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.045 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.045 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.045 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.046 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.046 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.046 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.046 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.046 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.046 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.046 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.047 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.047 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.047 106686 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.047 106686 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.047 106686 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.047 106686 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.047 106686 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:45:06 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:45:06.048 106686 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 11 08:45:07 np0005555520 systemd-logind[786]: New session 22 of user zuul.
Dec 11 08:45:07 np0005555520 systemd[1]: Started Session 22 of User zuul.
Dec 11 08:45:08 np0005555520 python3.9[106983]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:45:09 np0005555520 python3.9[107139]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:45:11 np0005555520 python3.9[107304]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:45:11 np0005555520 systemd[1]: Reloading.
Dec 11 08:45:11 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:45:11 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:45:12 np0005555520 python3.9[107489]: ansible-ansible.builtin.service_facts Invoked
Dec 11 08:45:12 np0005555520 network[107506]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 08:45:12 np0005555520 network[107507]: 'network-scripts' will be removed from distribution in near future.
Dec 11 08:45:12 np0005555520 network[107508]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 08:45:16 np0005555520 python3.9[107771]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:45:17 np0005555520 python3.9[107924]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:45:18 np0005555520 python3.9[108077]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:45:18 np0005555520 python3.9[108230]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:45:20 np0005555520 python3.9[108383]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:45:21 np0005555520 python3.9[108536]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:45:22 np0005555520 python3.9[108689]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:45:23 np0005555520 python3.9[108842]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:24 np0005555520 python3.9[108994]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:24 np0005555520 python3.9[109146]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:25 np0005555520 python3.9[109298]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:26 np0005555520 python3.9[109451]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:26 np0005555520 python3.9[109603]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:27 np0005555520 python3.9[109755]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:28 np0005555520 python3.9[109907]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:28 np0005555520 python3.9[110059]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:29 np0005555520 python3.9[110211]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:30 np0005555520 python3.9[110363]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:30 np0005555520 python3.9[110515]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:31 np0005555520 python3.9[110667]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:32 np0005555520 python3.9[110819]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:45:32 np0005555520 podman[110844]: 2025-12-11 13:45:32.475870091 +0000 UTC m=+0.072174276 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 11 08:45:33 np0005555520 python3.9[110991]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:45:34 np0005555520 python3.9[111143]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 08:45:34 np0005555520 python3.9[111295]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:45:34 np0005555520 systemd[1]: Reloading.
Dec 11 08:45:35 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:45:35 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:45:35 np0005555520 podman[111454]: 2025-12-11 13:45:35.842001684 +0000 UTC m=+0.120307563 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 08:45:35 np0005555520 python3.9[111499]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:45:36 np0005555520 python3.9[111662]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:45:37 np0005555520 python3.9[111815]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:45:38 np0005555520 python3.9[111968]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:45:38 np0005555520 python3.9[112121]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:45:39 np0005555520 python3.9[112274]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:45:40 np0005555520 python3.9[112427]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:45:41 np0005555520 python3.9[112580]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 11 08:45:42 np0005555520 python3.9[112733]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 08:45:43 np0005555520 python3.9[112891]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 11 08:45:44 np0005555520 python3.9[113051]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:45:45 np0005555520 python3.9[113135]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:46:03 np0005555520 podman[113325]: 2025-12-11 13:46:03.486697192 +0000 UTC m=+0.074650096 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 11 08:46:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:46:04.052 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:46:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:46:04.053 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:46:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:46:04.054 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:46:06 np0005555520 podman[113347]: 2025-12-11 13:46:06.63211376 +0000 UTC m=+0.141481869 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 11 08:46:15 np0005555520 kernel: SELinux:  Converting 2759 SID table entries...
Dec 11 08:46:15 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:46:15 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:46:15 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:46:15 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:46:15 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:46:15 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:46:15 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:46:26 np0005555520 kernel: SELinux:  Converting 2759 SID table entries...
Dec 11 08:46:26 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:46:26 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:46:26 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:46:26 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:46:26 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:46:26 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:46:26 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:46:34 np0005555520 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 11 08:46:34 np0005555520 podman[113388]: 2025-12-11 13:46:34.47991282 +0000 UTC m=+0.068620261 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 08:46:37 np0005555520 podman[113407]: 2025-12-11 13:46:37.501469941 +0000 UTC m=+0.108325759 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 08:47:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:47:04.054 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:47:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:47:04.054 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:47:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:47:04.054 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:47:05 np0005555520 podman[128907]: 2025-12-11 13:47:05.472032698 +0000 UTC m=+0.064947415 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 11 08:47:08 np0005555520 podman[130240]: 2025-12-11 13:47:08.508511207 +0000 UTC m=+0.113512031 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 11 08:47:23 np0005555520 kernel: SELinux:  Converting 2760 SID table entries...
Dec 11 08:47:23 np0005555520 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:47:23 np0005555520 kernel: SELinux:  policy capability open_perms=1
Dec 11 08:47:23 np0005555520 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:47:23 np0005555520 kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:47:23 np0005555520 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:47:23 np0005555520 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:47:23 np0005555520 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:47:25 np0005555520 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 11 08:47:25 np0005555520 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 11 08:47:25 np0005555520 dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 11 08:47:33 np0005555520 systemd[1]: Stopping OpenSSH server daemon...
Dec 11 08:47:33 np0005555520 systemd[1]: sshd.service: Deactivated successfully.
Dec 11 08:47:33 np0005555520 systemd[1]: Stopped OpenSSH server daemon.
Dec 11 08:47:33 np0005555520 systemd[1]: sshd.service: Consumed 2.056s CPU time, read 32.0K from disk, written 0B to disk.
Dec 11 08:47:33 np0005555520 systemd[1]: Stopped target sshd-keygen.target.
Dec 11 08:47:33 np0005555520 systemd[1]: Stopping sshd-keygen.target...
Dec 11 08:47:33 np0005555520 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 08:47:33 np0005555520 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 08:47:33 np0005555520 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 08:47:33 np0005555520 systemd[1]: Reached target sshd-keygen.target.
Dec 11 08:47:33 np0005555520 systemd[1]: Starting OpenSSH server daemon...
Dec 11 08:47:33 np0005555520 systemd[1]: Started OpenSSH server daemon.
Dec 11 08:47:35 np0005555520 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:47:35 np0005555520 systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:47:35 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:35 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:35 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:35 np0005555520 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:47:35 np0005555520 podman[131349]: 2025-12-11 13:47:35.797921323 +0000 UTC m=+0.085332035 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 08:47:38 np0005555520 podman[134744]: 2025-12-11 13:47:38.986174758 +0000 UTC m=+0.087861348 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 11 08:47:40 np0005555520 python3.9[135699]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 08:47:40 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:40 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:40 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:41 np0005555520 python3.9[136925]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 08:47:41 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:41 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:41 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:42 np0005555520 python3.9[138119]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 08:47:42 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:42 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:42 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:43 np0005555520 python3.9[139465]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 08:47:43 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:43 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:43 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:44 np0005555520 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:47:44 np0005555520 systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:47:44 np0005555520 systemd[1]: man-db-cache-update.service: Consumed 11.181s CPU time.
Dec 11 08:47:44 np0005555520 systemd[1]: run-r43b4554e0a554fdd9971b0e994b845b5.service: Deactivated successfully.
Dec 11 08:47:44 np0005555520 python3.9[140649]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:44 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:44 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:44 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:45 np0005555520 python3.9[140840]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:45 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:45 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:45 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:46 np0005555520 python3.9[141031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:46 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:46 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:46 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:47 np0005555520 python3.9[141221]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:48 np0005555520 python3.9[141376]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:48 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:48 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:48 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:50 np0005555520 python3.9[141566]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 08:47:50 np0005555520 systemd[1]: Reloading.
Dec 11 08:47:50 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:47:50 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:47:51 np0005555520 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 11 08:47:51 np0005555520 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 11 08:47:51 np0005555520 python3.9[141759]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:52 np0005555520 python3.9[141914]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:53 np0005555520 python3.9[142069]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:54 np0005555520 python3.9[142224]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:55 np0005555520 python3.9[142379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:56 np0005555520 python3.9[142534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:57 np0005555520 python3.9[142689]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:57 np0005555520 python3.9[142844]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:58 np0005555520 python3.9[142999]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:47:59 np0005555520 python3.9[143154]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:48:00 np0005555520 python3.9[143309]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:48:01 np0005555520 python3.9[143464]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:48:02 np0005555520 python3.9[143619]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:48:03 np0005555520 python3.9[143774]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 11 08:48:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:48:04.055 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:48:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:48:04.056 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:48:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:48:04.056 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:48:04 np0005555520 python3.9[143929]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:48:04 np0005555520 python3.9[144081]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:48:05 np0005555520 python3.9[144233]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:48:06 np0005555520 podman[144357]: 2025-12-11 13:48:06.039248876 +0000 UTC m=+0.109227993 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 08:48:06 np0005555520 python3.9[144400]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:48:06 np0005555520 python3.9[144556]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:48:07 np0005555520 python3.9[144708]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:48:08 np0005555520 python3.9[144860]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:09 np0005555520 podman[144957]: 2025-12-11 13:48:09.277827485 +0000 UTC m=+0.120284134 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:48:09 np0005555520 python3.9[145004]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765460887.8373845-554-16570351477921/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:10 np0005555520 python3.9[145162]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:10 np0005555520 python3.9[145287]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765460889.573145-554-258162293307902/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:11 np0005555520 python3.9[145439]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:12 np0005555520 python3.9[145564]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765460890.9317327-554-103374351657870/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:12 np0005555520 python3.9[145716]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:13 np0005555520 python3.9[145841]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765460892.2383485-554-241493957859189/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:14 np0005555520 python3.9[145993]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:14 np0005555520 python3.9[146118]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765460893.5524316-554-266628618693071/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:15 np0005555520 python3.9[146270]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:15 np0005555520 python3.9[146395]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765460894.8758817-554-126465843766951/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:16 np0005555520 python3.9[146547]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:17 np0005555520 python3.9[146670]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765460896.0738237-554-247971009356381/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:17 np0005555520 python3.9[146822]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:18 np0005555520 python3.9[146947]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765460897.316162-554-219439707672869/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:19 np0005555520 python3.9[147099]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 11 08:48:20 np0005555520 python3.9[147252]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:20 np0005555520 python3.9[147404]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:21 np0005555520 python3.9[147556]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:22 np0005555520 python3.9[147708]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:22 np0005555520 python3.9[147860]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:23 np0005555520 python3.9[148012]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:24 np0005555520 python3.9[148164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:24 np0005555520 python3.9[148316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:25 np0005555520 python3.9[148468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:26 np0005555520 python3.9[148622]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:26 np0005555520 python3.9[148774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:27 np0005555520 python3.9[148926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:28 np0005555520 python3.9[149078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:28 np0005555520 python3.9[149230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:29 np0005555520 python3.9[149382]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:30 np0005555520 python3.9[149505]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460909.136596-775-212939400519378/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:30 np0005555520 python3.9[149657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:31 np0005555520 python3.9[149780]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460910.373715-775-245099192989386/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:32 np0005555520 python3.9[149932]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:32 np0005555520 python3.9[150055]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460911.6353724-775-1947965651090/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:33 np0005555520 python3.9[150207]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:34 np0005555520 python3.9[150330]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460913.0087152-775-5732741490060/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:34 np0005555520 python3.9[150482]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:35 np0005555520 python3.9[150605]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460914.236886-775-243887334621774/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:35 np0005555520 python3.9[150757]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:36 np0005555520 podman[150852]: 2025-12-11 13:48:36.361701176 +0000 UTC m=+0.053976552 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:48:36 np0005555520 python3.9[150898]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460915.4736898-775-20286983866454/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:37 np0005555520 python3.9[151050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:37 np0005555520 python3.9[151173]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460916.7201908-775-266356057901489/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:38 np0005555520 python3.9[151325]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:39 np0005555520 python3.9[151448]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460917.978654-775-246515246139350/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:39 np0005555520 podman[151499]: 2025-12-11 13:48:39.504135114 +0000 UTC m=+0.101695543 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 11 08:48:39 np0005555520 python3.9[151628]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:40 np0005555520 python3.9[151751]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460919.3117063-775-243316334134939/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:41 np0005555520 python3.9[151903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:41 np0005555520 python3.9[152026]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460920.6470015-775-156195313554924/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:42 np0005555520 python3.9[152178]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:42 np0005555520 python3.9[152301]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460921.9283407-775-82354500453329/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:43 np0005555520 python3.9[152453]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:44 np0005555520 python3.9[152576]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460923.162946-775-262222855824533/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:45 np0005555520 python3.9[152728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:45 np0005555520 python3.9[152851]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460924.558686-775-149628998275125/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:46 np0005555520 python3.9[153003]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:48:46 np0005555520 python3.9[153126]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460925.8964083-775-123732374050629/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:47 np0005555520 python3.9[153276]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:48:48 np0005555520 python3.9[153431]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 11 08:48:50 np0005555520 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 11 08:48:50 np0005555520 python3.9[153587]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:51 np0005555520 python3.9[153739]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:51 np0005555520 python3.9[153891]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:52 np0005555520 python3.9[154043]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:52 np0005555520 python3.9[154195]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:53 np0005555520 python3.9[154347]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:54 np0005555520 python3.9[154499]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:55 np0005555520 python3.9[154651]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:55 np0005555520 python3.9[154803]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:56 np0005555520 python3.9[154955]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:48:57 np0005555520 python3.9[155107]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:48:57 np0005555520 systemd[1]: Reloading.
Dec 11 08:48:57 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:48:57 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:48:57 np0005555520 systemd[1]: Starting libvirt logging daemon socket...
Dec 11 08:48:57 np0005555520 systemd[1]: Listening on libvirt logging daemon socket.
Dec 11 08:48:57 np0005555520 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 11 08:48:57 np0005555520 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 11 08:48:57 np0005555520 systemd[1]: Starting libvirt logging daemon...
Dec 11 08:48:57 np0005555520 systemd[1]: Started libvirt logging daemon.
Dec 11 08:48:58 np0005555520 python3.9[155300]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:48:58 np0005555520 systemd[1]: Reloading.
Dec 11 08:48:58 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:48:58 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:48:58 np0005555520 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 11 08:48:58 np0005555520 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 11 08:48:58 np0005555520 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 11 08:48:58 np0005555520 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 11 08:48:58 np0005555520 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 11 08:48:58 np0005555520 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 11 08:48:58 np0005555520 systemd[1]: Starting libvirt nodedev daemon...
Dec 11 08:48:58 np0005555520 systemd[1]: Started libvirt nodedev daemon.
Dec 11 08:48:59 np0005555520 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 11 08:48:59 np0005555520 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 11 08:48:59 np0005555520 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 11 08:48:59 np0005555520 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 11 08:48:59 np0005555520 python3.9[155517]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:48:59 np0005555520 systemd[1]: Reloading.
Dec 11 08:48:59 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:48:59 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:49:00 np0005555520 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 11 08:49:00 np0005555520 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 11 08:49:00 np0005555520 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 11 08:49:00 np0005555520 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 11 08:49:00 np0005555520 systemd[1]: Starting libvirt proxy daemon...
Dec 11 08:49:00 np0005555520 systemd[1]: Started libvirt proxy daemon.
Dec 11 08:49:00 np0005555520 setroubleshoot[155389]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3c2e9618-02cf-4151-bbca-b92dae50276e
Dec 11 08:49:00 np0005555520 setroubleshoot[155389]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec 11 08:49:00 np0005555520 setroubleshoot[155389]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3c2e9618-02cf-4151-bbca-b92dae50276e
Dec 11 08:49:00 np0005555520 setroubleshoot[155389]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec 11 08:49:00 np0005555520 python3.9[155738]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:49:00 np0005555520 systemd[1]: Reloading.
Dec 11 08:49:01 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:49:01 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:49:01 np0005555520 systemd[1]: Listening on libvirt locking daemon socket.
Dec 11 08:49:01 np0005555520 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 11 08:49:01 np0005555520 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 11 08:49:01 np0005555520 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 11 08:49:01 np0005555520 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 11 08:49:01 np0005555520 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 11 08:49:01 np0005555520 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 11 08:49:01 np0005555520 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 11 08:49:01 np0005555520 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 11 08:49:01 np0005555520 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 11 08:49:01 np0005555520 systemd[1]: Starting libvirt QEMU daemon...
Dec 11 08:49:01 np0005555520 systemd[1]: Started libvirt QEMU daemon.
Dec 11 08:49:02 np0005555520 python3.9[155953]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:49:02 np0005555520 systemd[1]: Reloading.
Dec 11 08:49:02 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:49:02 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:49:02 np0005555520 systemd[1]: Starting libvirt secret daemon socket...
Dec 11 08:49:02 np0005555520 systemd[1]: Listening on libvirt secret daemon socket.
Dec 11 08:49:02 np0005555520 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 11 08:49:02 np0005555520 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 11 08:49:02 np0005555520 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 11 08:49:02 np0005555520 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 11 08:49:02 np0005555520 systemd[1]: Starting libvirt secret daemon...
Dec 11 08:49:02 np0005555520 systemd[1]: Started libvirt secret daemon.
Dec 11 08:49:03 np0005555520 python3.9[156164]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:49:04.056 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:49:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:49:04.058 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:49:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:49:04.059 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:49:04 np0005555520 python3.9[156316]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 08:49:05 np0005555520 python3.9[156468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:05 np0005555520 python3.9[156591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460944.6716568-1120-104488943717291/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:06 np0005555520 podman[156616]: 2025-12-11 13:49:06.492859783 +0000 UTC m=+0.080797695 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:49:07 np0005555520 python3.9[156762]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:08 np0005555520 python3.9[156914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:08 np0005555520 python3.9[156992]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:09 np0005555520 python3.9[157144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:09 np0005555520 podman[157222]: 2025-12-11 13:49:09.65957598 +0000 UTC m=+0.107049586 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 11 08:49:09 np0005555520 python3.9[157223]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.46uklu93 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:10 np0005555520 python3.9[157402]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:10 np0005555520 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 11 08:49:10 np0005555520 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.066s CPU time.
Dec 11 08:49:10 np0005555520 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 11 08:49:11 np0005555520 python3.9[157481]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:11 np0005555520 python3.9[157633]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:49:12 np0005555520 python3[157786]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 11 08:49:13 np0005555520 python3.9[157940]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:13 np0005555520 python3.9[158018]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:14 np0005555520 python3.9[158170]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:15 np0005555520 python3.9[158248]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:15 np0005555520 python3.9[158400]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:16 np0005555520 python3.9[158478]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:17 np0005555520 python3.9[158630]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:17 np0005555520 python3.9[158708]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:18 np0005555520 python3.9[158860]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:19 np0005555520 python3.9[158985]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765460957.9059494-1245-88850207231561/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:19 np0005555520 python3.9[159137]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:20 np0005555520 python3.9[159289]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:49:21 np0005555520 python3.9[159444]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:22 np0005555520 python3.9[159596]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:49:22 np0005555520 python3.9[159749]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:49:23 np0005555520 python3.9[159903]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:49:24 np0005555520 python3.9[160058]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:25 np0005555520 python3.9[160210]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:25 np0005555520 python3.9[160333]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460964.4604456-1317-243184419433499/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:26 np0005555520 python3.9[160485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:27 np0005555520 python3.9[160608]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460965.793808-1332-213453333812986/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:27 np0005555520 python3.9[160760]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:49:28 np0005555520 python3.9[160883]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765460967.4143522-1347-964569584568/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:29 np0005555520 python3.9[161035]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:49:29 np0005555520 systemd[1]: Reloading.
Dec 11 08:49:29 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:49:29 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:49:30 np0005555520 systemd[1]: Reached target edpm_libvirt.target.
Dec 11 08:49:31 np0005555520 python3.9[161226]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 11 08:49:31 np0005555520 systemd[1]: Reloading.
Dec 11 08:49:31 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:49:31 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:49:31 np0005555520 systemd[1]: Reloading.
Dec 11 08:49:31 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:49:31 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:49:32 np0005555520 systemd[1]: session-22.scope: Deactivated successfully.
Dec 11 08:49:32 np0005555520 systemd[1]: session-22.scope: Consumed 3min 38.341s CPU time.
Dec 11 08:49:32 np0005555520 systemd-logind[786]: Session 22 logged out. Waiting for processes to exit.
Dec 11 08:49:32 np0005555520 systemd-logind[786]: Removed session 22.
Dec 11 08:49:37 np0005555520 podman[161323]: 2025-12-11 13:49:37.477768307 +0000 UTC m=+0.073577426 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 11 08:49:40 np0005555520 podman[161342]: 2025-12-11 13:49:40.520978372 +0000 UTC m=+0.112396938 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 11 08:49:44 np0005555520 systemd-logind[786]: New session 23 of user zuul.
Dec 11 08:49:44 np0005555520 systemd[1]: Started Session 23 of User zuul.
Dec 11 08:49:45 np0005555520 python3.9[161522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:49:46 np0005555520 python3.9[161676]: ansible-ansible.builtin.service_facts Invoked
Dec 11 08:49:47 np0005555520 network[161693]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 08:49:47 np0005555520 network[161694]: 'network-scripts' will be removed from distribution in near future.
Dec 11 08:49:47 np0005555520 network[161695]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 08:49:54 np0005555520 python3.9[161966]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:49:55 np0005555520 python3.9[162050]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:50:01 np0005555520 python3.9[162205]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:50:02 np0005555520 python3.9[162357]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:50:03 np0005555520 python3.9[162510]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:50:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:50:04.058 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:50:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:50:04.059 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:50:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:50:04.060 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:50:04 np0005555520 python3.9[162662]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:50:04 np0005555520 python3.9[162815]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:05 np0005555520 python3.9[162938]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461004.4422984-95-171870542691791/.source.iscsi _original_basename=.x8y7dnmb follow=False checksum=5a9dda11650dcf156d99f919a392768fd08b479e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:06 np0005555520 python3.9[163090]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:07 np0005555520 python3.9[163242]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:07 np0005555520 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:50:07 np0005555520 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:50:08 np0005555520 podman[163343]: 2025-12-11 13:50:08.465697043 +0000 UTC m=+0.062948365 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 11 08:50:08 np0005555520 python3.9[163415]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:50:08 np0005555520 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 11 08:50:09 np0005555520 python3.9[163571]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:50:09 np0005555520 systemd[1]: Reloading.
Dec 11 08:50:10 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:50:10 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:50:10 np0005555520 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 11 08:50:10 np0005555520 systemd[1]: Starting Open-iSCSI...
Dec 11 08:50:10 np0005555520 kernel: Loading iSCSI transport class v2.0-870.
Dec 11 08:50:10 np0005555520 systemd[1]: Started Open-iSCSI.
Dec 11 08:50:10 np0005555520 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 11 08:50:10 np0005555520 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 11 08:50:11 np0005555520 podman[163744]: 2025-12-11 13:50:11.141515407 +0000 UTC m=+0.178196886 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:50:11 np0005555520 python3.9[163794]: ansible-ansible.builtin.service_facts Invoked
Dec 11 08:50:11 np0005555520 network[163814]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 08:50:11 np0005555520 network[163815]: 'network-scripts' will be removed from distribution in near future.
Dec 11 08:50:11 np0005555520 network[163816]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 08:50:16 np0005555520 python3.9[164087]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 11 08:50:17 np0005555520 python3.9[164239]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 11 08:50:18 np0005555520 python3.9[164395]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:18 np0005555520 python3.9[164518]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461017.4488666-172-114342602318051/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:19 np0005555520 python3.9[164670]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:20 np0005555520 python3.9[164822]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:50:20 np0005555520 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 11 08:50:20 np0005555520 systemd[1]: Stopped Load Kernel Modules.
Dec 11 08:50:20 np0005555520 systemd[1]: Stopping Load Kernel Modules...
Dec 11 08:50:20 np0005555520 systemd[1]: Starting Load Kernel Modules...
Dec 11 08:50:20 np0005555520 systemd[1]: Finished Load Kernel Modules.
Dec 11 08:50:21 np0005555520 python3.9[164978]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:50:22 np0005555520 python3.9[165130]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:50:23 np0005555520 python3.9[165282]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:50:23 np0005555520 python3.9[165434]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:24 np0005555520 python3.9[165557]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461023.2076771-230-104855528843045/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:25 np0005555520 python3.9[165709]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:50:25 np0005555520 python3.9[165862]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:26 np0005555520 python3.9[166014]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:27 np0005555520 python3.9[166166]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:27 np0005555520 python3.9[166318]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:28 np0005555520 python3.9[166470]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:29 np0005555520 python3.9[166622]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:29 np0005555520 python3.9[166774]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:30 np0005555520 python3.9[166926]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:50:31 np0005555520 python3.9[167080]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:32 np0005555520 python3.9[167232]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:50:32 np0005555520 python3.9[167384]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:33 np0005555520 python3.9[167462]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:50:33 np0005555520 python3.9[167614]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:34 np0005555520 python3.9[167692]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:50:35 np0005555520 python3.9[167844]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:35 np0005555520 python3.9[167996]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:36 np0005555520 python3.9[168074]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:36 np0005555520 python3.9[168226]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:37 np0005555520 python3.9[168304]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:38 np0005555520 python3.9[168456]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:50:38 np0005555520 systemd[1]: Reloading.
Dec 11 08:50:38 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:50:38 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:50:38 np0005555520 podman[168492]: 2025-12-11 13:50:38.701119952 +0000 UTC m=+0.073404270 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 11 08:50:39 np0005555520 python3.9[168663]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:39 np0005555520 python3.9[168741]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:40 np0005555520 python3.9[168893]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:41 np0005555520 python3.9[168971]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:41 np0005555520 podman[169069]: 2025-12-11 13:50:41.496798694 +0000 UTC m=+0.087144329 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 11 08:50:41 np0005555520 python3.9[169149]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:50:41 np0005555520 systemd[1]: Reloading.
Dec 11 08:50:41 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:50:41 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:50:42 np0005555520 systemd[1]: Starting Create netns directory...
Dec 11 08:50:42 np0005555520 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 11 08:50:42 np0005555520 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 11 08:50:42 np0005555520 systemd[1]: Finished Create netns directory.
Dec 11 08:50:43 np0005555520 python3.9[169343]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:50:43 np0005555520 python3.9[169495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:44 np0005555520 python3.9[169618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461043.320007-437-29681922232669/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:50:45 np0005555520 python3.9[169770]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:50:45 np0005555520 python3.9[169922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:50:46 np0005555520 python3.9[170045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461045.5414155-462-42472686602126/.source.json _original_basename=.huygkwm_ follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:47 np0005555520 python3.9[170197]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:49 np0005555520 python3.9[170626]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 11 08:50:50 np0005555520 python3.9[170778]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:50:51 np0005555520 python3.9[170930]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 11 08:50:56 np0005555520 python3[171109]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:50:56 np0005555520 podman[171147]: 2025-12-11 13:50:56.506213585 +0000 UTC m=+0.058201166 container create 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 11 08:50:56 np0005555520 podman[171147]: 2025-12-11 13:50:56.474255755 +0000 UTC m=+0.026243386 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 11 08:50:56 np0005555520 python3[171109]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 11 08:50:57 np0005555520 python3.9[171337]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:50:58 np0005555520 python3.9[171491]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:58 np0005555520 python3.9[171567]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:50:58 np0005555520 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 11 08:50:59 np0005555520 python3.9[171719]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765461058.732152-550-200063720256484/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:50:59 np0005555520 python3.9[171795]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:50:59 np0005555520 systemd[1]: Reloading.
Dec 11 08:51:00 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:51:00 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:51:00 np0005555520 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 11 08:51:00 np0005555520 python3.9[171907]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:51:00 np0005555520 systemd[1]: Reloading.
Dec 11 08:51:01 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:51:01 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:51:01 np0005555520 systemd[1]: Starting multipathd container...
Dec 11 08:51:01 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:51:01 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eeac5f92a624a37323027b30bc80b3f692bdd97aee4684ca0fb7bb979c4a898/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 11 08:51:01 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eeac5f92a624a37323027b30bc80b3f692bdd97aee4684ca0fb7bb979c4a898/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 11 08:51:01 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.
Dec 11 08:51:01 np0005555520 podman[171946]: 2025-12-11 13:51:01.397888331 +0000 UTC m=+0.140749076 container init 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 11 08:51:01 np0005555520 multipathd[171959]: + sudo -E kolla_set_configs
Dec 11 08:51:01 np0005555520 podman[171946]: 2025-12-11 13:51:01.440492321 +0000 UTC m=+0.183353036 container start 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 11 08:51:01 np0005555520 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 11 08:51:01 np0005555520 podman[171946]: multipathd
Dec 11 08:51:01 np0005555520 systemd[1]: Started multipathd container.
Dec 11 08:51:01 np0005555520 multipathd[171959]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:51:01 np0005555520 multipathd[171959]: INFO:__main__:Validating config file
Dec 11 08:51:01 np0005555520 multipathd[171959]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:51:01 np0005555520 multipathd[171959]: INFO:__main__:Writing out command to execute
Dec 11 08:51:01 np0005555520 multipathd[171959]: ++ cat /run_command
Dec 11 08:51:01 np0005555520 multipathd[171959]: + CMD='/usr/sbin/multipathd -d'
Dec 11 08:51:01 np0005555520 multipathd[171959]: + ARGS=
Dec 11 08:51:01 np0005555520 multipathd[171959]: + sudo kolla_copy_cacerts
Dec 11 08:51:01 np0005555520 multipathd[171959]: + [[ ! -n '' ]]
Dec 11 08:51:01 np0005555520 multipathd[171959]: + . kolla_extend_start
Dec 11 08:51:01 np0005555520 multipathd[171959]: Running command: '/usr/sbin/multipathd -d'
Dec 11 08:51:01 np0005555520 multipathd[171959]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 11 08:51:01 np0005555520 multipathd[171959]: + umask 0022
Dec 11 08:51:01 np0005555520 multipathd[171959]: + exec /usr/sbin/multipathd -d
Dec 11 08:51:01 np0005555520 podman[171968]: 2025-12-11 13:51:01.521630146 +0000 UTC m=+0.066579838 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 08:51:01 np0005555520 multipathd[171959]: 3122.478764 | --------start up--------
Dec 11 08:51:01 np0005555520 multipathd[171959]: 3122.478787 | read /etc/multipath.conf
Dec 11 08:51:01 np0005555520 systemd[1]: 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd-108fccc59dfa4fa6.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:51:01 np0005555520 systemd[1]: 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd-108fccc59dfa4fa6.service: Failed with result 'exit-code'.
Dec 11 08:51:01 np0005555520 multipathd[171959]: 3122.485198 | path checkers start up
Dec 11 08:51:02 np0005555520 python3.9[172151]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:51:02 np0005555520 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 11 08:51:02 np0005555520 python3.9[172306]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:03 np0005555520 python3.9[172471]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:51:03 np0005555520 systemd[1]: Stopping multipathd container...
Dec 11 08:51:03 np0005555520 multipathd[171959]: 3124.824991 | exit (signal)
Dec 11 08:51:03 np0005555520 multipathd[171959]: 3124.825899 | --------shut down-------
Dec 11 08:51:03 np0005555520 systemd[1]: libpod-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.scope: Deactivated successfully.
Dec 11 08:51:03 np0005555520 podman[172475]: 2025-12-11 13:51:03.910305897 +0000 UTC m=+0.074715975 container died 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 08:51:03 np0005555520 systemd[1]: 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd-108fccc59dfa4fa6.timer: Deactivated successfully.
Dec 11 08:51:03 np0005555520 systemd[1]: Stopped /usr/bin/podman healthcheck run 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.
Dec 11 08:51:03 np0005555520 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd-userdata-shm.mount: Deactivated successfully.
Dec 11 08:51:03 np0005555520 systemd[1]: var-lib-containers-storage-overlay-3eeac5f92a624a37323027b30bc80b3f692bdd97aee4684ca0fb7bb979c4a898-merged.mount: Deactivated successfully.
Dec 11 08:51:03 np0005555520 podman[172475]: 2025-12-11 13:51:03.963670468 +0000 UTC m=+0.128080546 container cleanup 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 11 08:51:03 np0005555520 podman[172475]: multipathd
Dec 11 08:51:04 np0005555520 podman[172505]: multipathd
Dec 11 08:51:04 np0005555520 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 11 08:51:04 np0005555520 systemd[1]: Stopped multipathd container.
Dec 11 08:51:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:51:04.058 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:51:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:51:04.059 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:51:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:51:04.059 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:51:04 np0005555520 systemd[1]: Starting multipathd container...
Dec 11 08:51:04 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:51:04 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eeac5f92a624a37323027b30bc80b3f692bdd97aee4684ca0fb7bb979c4a898/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 11 08:51:04 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eeac5f92a624a37323027b30bc80b3f692bdd97aee4684ca0fb7bb979c4a898/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 11 08:51:04 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.
Dec 11 08:51:04 np0005555520 podman[172516]: 2025-12-11 13:51:04.199283487 +0000 UTC m=+0.121918090 container init 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 11 08:51:04 np0005555520 multipathd[172532]: + sudo -E kolla_set_configs
Dec 11 08:51:04 np0005555520 podman[172516]: 2025-12-11 13:51:04.221079519 +0000 UTC m=+0.143714092 container start 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:51:04 np0005555520 podman[172516]: multipathd
Dec 11 08:51:04 np0005555520 systemd[1]: Started multipathd container.
Dec 11 08:51:04 np0005555520 multipathd[172532]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:51:04 np0005555520 multipathd[172532]: INFO:__main__:Validating config file
Dec 11 08:51:04 np0005555520 multipathd[172532]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:51:04 np0005555520 multipathd[172532]: INFO:__main__:Writing out command to execute
Dec 11 08:51:04 np0005555520 podman[172539]: 2025-12-11 13:51:04.301881745 +0000 UTC m=+0.065764107 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 08:51:04 np0005555520 multipathd[172532]: ++ cat /run_command
Dec 11 08:51:04 np0005555520 multipathd[172532]: + CMD='/usr/sbin/multipathd -d'
Dec 11 08:51:04 np0005555520 multipathd[172532]: + ARGS=
Dec 11 08:51:04 np0005555520 multipathd[172532]: + sudo kolla_copy_cacerts
Dec 11 08:51:04 np0005555520 systemd[1]: 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd-3c69549f38d9b52b.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:51:04 np0005555520 systemd[1]: 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd-3c69549f38d9b52b.service: Failed with result 'exit-code'.
Dec 11 08:51:04 np0005555520 multipathd[172532]: + [[ ! -n '' ]]
Dec 11 08:51:04 np0005555520 multipathd[172532]: + . kolla_extend_start
Dec 11 08:51:04 np0005555520 multipathd[172532]: Running command: '/usr/sbin/multipathd -d'
Dec 11 08:51:04 np0005555520 multipathd[172532]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 11 08:51:04 np0005555520 multipathd[172532]: + umask 0022
Dec 11 08:51:04 np0005555520 multipathd[172532]: + exec /usr/sbin/multipathd -d
Dec 11 08:51:04 np0005555520 multipathd[172532]: 3125.299112 | --------start up--------
Dec 11 08:51:04 np0005555520 multipathd[172532]: 3125.299138 | read /etc/multipath.conf
Dec 11 08:51:04 np0005555520 multipathd[172532]: 3125.304693 | path checkers start up
Dec 11 08:51:04 np0005555520 python3.9[172723]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:05 np0005555520 python3.9[172875]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 11 08:51:06 np0005555520 python3.9[173027]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 11 08:51:06 np0005555520 kernel: Key type psk registered
Dec 11 08:51:07 np0005555520 python3.9[173189]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:51:07 np0005555520 python3.9[173312]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461066.746911-630-57650510043506/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:08 np0005555520 python3.9[173464]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:09 np0005555520 podman[173588]: 2025-12-11 13:51:09.101242168 +0000 UTC m=+0.064093873 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:51:09 np0005555520 python3.9[173633]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:51:09 np0005555520 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 11 08:51:09 np0005555520 systemd[1]: Stopped Load Kernel Modules.
Dec 11 08:51:09 np0005555520 systemd[1]: Stopping Load Kernel Modules...
Dec 11 08:51:09 np0005555520 systemd[1]: Starting Load Kernel Modules...
Dec 11 08:51:09 np0005555520 systemd[1]: Finished Load Kernel Modules.
Dec 11 08:51:10 np0005555520 python3.9[173791]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:51:12 np0005555520 podman[173796]: 2025-12-11 13:51:12.527879038 +0000 UTC m=+0.129056227 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 11 08:51:13 np0005555520 systemd[1]: Reloading.
Dec 11 08:51:13 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:51:13 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:51:13 np0005555520 systemd[1]: Reloading.
Dec 11 08:51:13 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:51:13 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:51:13 np0005555520 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 11 08:51:13 np0005555520 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 11 08:51:13 np0005555520 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:51:13 np0005555520 systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:51:14 np0005555520 systemd[1]: Reloading.
Dec 11 08:51:14 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:51:14 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:51:14 np0005555520 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:51:15 np0005555520 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:51:15 np0005555520 systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:51:15 np0005555520 systemd[1]: man-db-cache-update.service: Consumed 1.781s CPU time.
Dec 11 08:51:15 np0005555520 systemd[1]: run-r327f15aab5cc434f97fd12b9bbf84c9b.service: Deactivated successfully.
Dec 11 08:51:15 np0005555520 python3.9[175209]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:51:15 np0005555520 iscsid[163612]: iscsid shutting down.
Dec 11 08:51:15 np0005555520 systemd[1]: Stopping Open-iSCSI...
Dec 11 08:51:15 np0005555520 systemd[1]: iscsid.service: Deactivated successfully.
Dec 11 08:51:15 np0005555520 systemd[1]: Stopped Open-iSCSI.
Dec 11 08:51:15 np0005555520 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 11 08:51:15 np0005555520 systemd[1]: Starting Open-iSCSI...
Dec 11 08:51:15 np0005555520 systemd[1]: Started Open-iSCSI.
Dec 11 08:51:16 np0005555520 python3.9[175439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:51:17 np0005555520 python3.9[175595]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:18 np0005555520 python3.9[175747]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:51:18 np0005555520 systemd[1]: Reloading.
Dec 11 08:51:18 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:51:18 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:51:19 np0005555520 python3.9[175932]: ansible-ansible.builtin.service_facts Invoked
Dec 11 08:51:19 np0005555520 network[175949]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 08:51:19 np0005555520 network[175950]: 'network-scripts' will be removed from distribution in near future.
Dec 11 08:51:19 np0005555520 network[175951]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 08:51:24 np0005555520 python3.9[176225]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:51:25 np0005555520 python3.9[176378]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:51:26 np0005555520 python3.9[176531]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:51:27 np0005555520 python3.9[176684]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:51:27 np0005555520 python3.9[176837]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:51:28 np0005555520 python3.9[176990]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:51:29 np0005555520 python3.9[177143]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:51:30 np0005555520 python3.9[177296]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:51:31 np0005555520 python3.9[177449]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:31 np0005555520 python3.9[177601]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:32 np0005555520 python3.9[177753]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:32 np0005555520 python3.9[177905]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:33 np0005555520 python3.9[178057]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:34 np0005555520 python3.9[178209]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:34 np0005555520 podman[178233]: 2025-12-11 13:51:34.482678263 +0000 UTC m=+0.075623989 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 11 08:51:34 np0005555520 python3.9[178381]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:35 np0005555520 python3.9[178533]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:36 np0005555520 python3.9[178685]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:36 np0005555520 python3.9[178837]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:37 np0005555520 python3.9[178989]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:38 np0005555520 python3.9[179143]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:38 np0005555520 python3.9[179295]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:39 np0005555520 podman[179419]: 2025-12-11 13:51:39.242925151 +0000 UTC m=+0.058573735 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 08:51:39 np0005555520 python3.9[179467]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:40 np0005555520 python3.9[179620]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:40 np0005555520 python3.9[179772]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:51:41 np0005555520 python3.9[179924]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:42 np0005555520 python3.9[180076]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 08:51:43 np0005555520 podman[180200]: 2025-12-11 13:51:43.000734415 +0000 UTC m=+0.095291328 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:51:43 np0005555520 python3.9[180248]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:51:43 np0005555520 systemd[1]: Reloading.
Dec 11 08:51:43 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:51:43 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:51:44 np0005555520 python3.9[180441]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:44 np0005555520 python3.9[180594]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:45 np0005555520 python3.9[180747]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:46 np0005555520 python3.9[180900]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:46 np0005555520 python3.9[181053]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:47 np0005555520 python3.9[181206]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:48 np0005555520 python3.9[181359]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:48 np0005555520 python3.9[181512]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:51:50 np0005555520 python3.9[181665]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:51:50 np0005555520 python3.9[181817]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:51:51 np0005555520 python3.9[181969]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:51:52 np0005555520 python3.9[182121]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:51:52 np0005555520 python3.9[182273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:51:53 np0005555520 python3.9[182425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:51:54 np0005555520 python3.9[182577]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:51:54 np0005555520 python3.9[182729]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:51:55 np0005555520 python3.9[182881]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:51:55 np0005555520 python3.9[183033]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:00 np0005555520 python3.9[183187]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 11 08:52:01 np0005555520 python3.9[183340]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 08:52:02 np0005555520 python3.9[183498]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 11 08:52:03 np0005555520 systemd-logind[786]: New session 24 of user zuul.
Dec 11 08:52:03 np0005555520 systemd[1]: Started Session 24 of User zuul.
Dec 11 08:52:03 np0005555520 systemd[1]: session-24.scope: Deactivated successfully.
Dec 11 08:52:03 np0005555520 systemd-logind[786]: Session 24 logged out. Waiting for processes to exit.
Dec 11 08:52:03 np0005555520 systemd-logind[786]: Removed session 24.
Dec 11 08:52:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:52:04.060 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:52:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:52:04.061 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:52:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:52:04.062 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:52:04 np0005555520 python3.9[183684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:52:04 np0005555520 podman[183779]: 2025-12-11 13:52:04.790573521 +0000 UTC m=+0.077613478 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 11 08:52:04 np0005555520 python3.9[183815]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461123.9292831-1229-241599671063264/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:05 np0005555520 python3.9[183975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:52:05 np0005555520 python3.9[184051]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:06 np0005555520 python3.9[184201]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:52:07 np0005555520 python3.9[184322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461126.160292-1229-199569047596621/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:07 np0005555520 python3.9[184472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:52:08 np0005555520 python3.9[184593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461127.3463974-1229-145795101910813/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:08 np0005555520 python3.9[184743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:52:09 np0005555520 podman[184838]: 2025-12-11 13:52:09.396400285 +0000 UTC m=+0.066538204 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 11 08:52:09 np0005555520 python3.9[184883]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461128.5306253-1229-186011470246985/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:10 np0005555520 python3.9[185035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:52:10 np0005555520 python3.9[185156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461129.7812376-1229-241784455110689/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:11 np0005555520 python3.9[185308]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:52:12 np0005555520 python3.9[185460]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:52:12 np0005555520 python3.9[185612]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:52:13 np0005555520 podman[185720]: 2025-12-11 13:52:13.496647347 +0000 UTC m=+0.093672712 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:52:13 np0005555520 python3.9[185787]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:52:14 np0005555520 python3.9[185911]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765461133.195211-1336-26732717002385/.source _original_basename=.2l9un3lv follow=False checksum=9d7f001ac84b86a057129b5023ad7edbb52ae542 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 11 08:52:15 np0005555520 python3.9[186063]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:52:15 np0005555520 python3.9[186215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:52:16 np0005555520 python3.9[186336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461135.3159223-1362-36799781957931/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:17 np0005555520 python3.9[186486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:52:17 np0005555520 python3.9[186607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461136.5603113-1377-172773207966304/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:18 np0005555520 python3.9[186759]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 11 08:52:19 np0005555520 python3.9[186911]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:52:20 np0005555520 python3[187063]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:52:20 np0005555520 podman[187098]: 2025-12-11 13:52:20.29528015 +0000 UTC m=+0.053136452 container create 5545c240e1df053844d735e7b5726039086457df4094b4e13dc54ff0f7b04cbd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:52:20 np0005555520 podman[187098]: 2025-12-11 13:52:20.267297929 +0000 UTC m=+0.025154241 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 11 08:52:20 np0005555520 python3[187063]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 11 08:52:21 np0005555520 python3.9[187288]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:52:22 np0005555520 python3.9[187442]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 11 08:52:22 np0005555520 python3.9[187594]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:52:23 np0005555520 python3[187746]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:52:23 np0005555520 podman[187783]: 2025-12-11 13:52:23.911987228 +0000 UTC m=+0.100329716 container create 23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 11 08:52:23 np0005555520 podman[187783]: 2025-12-11 13:52:23.842148025 +0000 UTC m=+0.030490523 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 11 08:52:23 np0005555520 python3[187746]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 11 08:52:24 np0005555520 python3.9[187972]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:52:25 np0005555520 python3.9[188126]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:52:26 np0005555520 python3.9[188279]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765461145.541201-1469-61568559944507/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:52:26 np0005555520 python3.9[188355]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:52:26 np0005555520 systemd[1]: Reloading.
Dec 11 08:52:27 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:52:27 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:52:27 np0005555520 python3.9[188466]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:52:27 np0005555520 systemd[1]: Reloading.
Dec 11 08:52:28 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:52:28 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:52:28 np0005555520 systemd[1]: Starting nova_compute container...
Dec 11 08:52:28 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:52:28 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:28 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:28 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:28 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:28 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:28 np0005555520 podman[188506]: 2025-12-11 13:52:28.624060064 +0000 UTC m=+0.347754139 container init 23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:52:28 np0005555520 podman[188506]: 2025-12-11 13:52:28.631748033 +0000 UTC m=+0.355442018 container start 23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 08:52:28 np0005555520 nova_compute[188522]: + sudo -E kolla_set_configs
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Validating config file
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Copying service configuration files
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Deleting /etc/ceph
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Creating directory /etc/ceph
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /etc/ceph
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Writing out command to execute
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 11 08:52:28 np0005555520 nova_compute[188522]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 11 08:52:28 np0005555520 nova_compute[188522]: ++ cat /run_command
Dec 11 08:52:28 np0005555520 nova_compute[188522]: + CMD=nova-compute
Dec 11 08:52:28 np0005555520 nova_compute[188522]: + ARGS=
Dec 11 08:52:28 np0005555520 nova_compute[188522]: + sudo kolla_copy_cacerts
Dec 11 08:52:28 np0005555520 podman[188506]: nova_compute
Dec 11 08:52:28 np0005555520 systemd[1]: Started nova_compute container.
Dec 11 08:52:28 np0005555520 nova_compute[188522]: + [[ ! -n '' ]]
Dec 11 08:52:28 np0005555520 nova_compute[188522]: + . kolla_extend_start
Dec 11 08:52:28 np0005555520 nova_compute[188522]: Running command: 'nova-compute'
Dec 11 08:52:28 np0005555520 nova_compute[188522]: + echo 'Running command: '\''nova-compute'\'''
Dec 11 08:52:28 np0005555520 nova_compute[188522]: + umask 0022
Dec 11 08:52:28 np0005555520 nova_compute[188522]: + exec nova-compute
Dec 11 08:52:29 np0005555520 python3.9[188683]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:52:30 np0005555520 python3.9[188834]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:52:30 np0005555520 nova_compute[188522]: 2025-12-11 13:52:30.956 188526 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 11 08:52:30 np0005555520 nova_compute[188522]: 2025-12-11 13:52:30.956 188526 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 11 08:52:30 np0005555520 nova_compute[188522]: 2025-12-11 13:52:30.957 188526 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 11 08:52:30 np0005555520 nova_compute[188522]: 2025-12-11 13:52:30.957 188526 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec 11 08:52:31 np0005555520 nova_compute[188522]: 2025-12-11 13:52:31.130 188526 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 08:52:31 np0005555520 nova_compute[188522]: 2025-12-11 13:52:31.156 188526 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 08:52:31 np0005555520 nova_compute[188522]: 2025-12-11 13:52:31.157 188526 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec 11 08:52:31 np0005555520 python3.9[188986]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:52:32 np0005555520 python3.9[189140]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.300 188526 INFO nova.virt.driver [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec 11 08:52:32 np0005555520 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:52:32 np0005555520 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.405 188526 INFO nova.compute.provider_config [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.637 188526 DEBUG oslo_concurrency.lockutils [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.638 188526 DEBUG oslo_concurrency.lockutils [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.638 188526 DEBUG oslo_concurrency.lockutils [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.638 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.638 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.639 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.639 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.639 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.639 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.639 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.639 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.639 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.640 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.640 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.640 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.640 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.640 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.640 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.640 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.641 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.641 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.641 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.641 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.641 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.641 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.641 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.642 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.642 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.642 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.642 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.642 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.642 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.642 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.643 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.643 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.643 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.643 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.643 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.643 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.644 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.644 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.644 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.644 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.644 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.644 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.644 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.645 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.645 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.645 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.645 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.645 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.646 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.646 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.646 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.646 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.646 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.646 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.647 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.647 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.647 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.647 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.647 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.647 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.647 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.648 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.648 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.648 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.648 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.648 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.648 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.648 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.649 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.649 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.649 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.649 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.649 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.650 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.650 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.650 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.650 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.650 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.650 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.650 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.651 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.651 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.651 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.651 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.651 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.651 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.652 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.652 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.652 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.652 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.652 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.652 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.652 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.652 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.653 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.653 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.653 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.653 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.653 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.653 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.653 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.654 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.654 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.654 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.654 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.654 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.654 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.654 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.655 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.655 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.655 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.655 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.655 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.655 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.656 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.656 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.656 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.656 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.656 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.656 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.656 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.657 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.657 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.657 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.657 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.657 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.657 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.658 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.658 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.658 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.658 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.658 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.658 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.658 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.659 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.659 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.659 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.659 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.659 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.659 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.659 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.660 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.660 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.660 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.660 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.660 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.660 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.660 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.661 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.661 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.661 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.661 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.661 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.661 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.661 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.662 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.662 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.662 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.662 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.662 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.662 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.662 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.663 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.663 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.663 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.663 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.663 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.663 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.663 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.664 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.664 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.664 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.664 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.664 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.664 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.664 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.665 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.665 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.665 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.665 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.665 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.665 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.665 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.666 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.666 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.666 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.666 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.666 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.666 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.667 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.667 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.667 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.667 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.667 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.667 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.667 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.668 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.668 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.668 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.668 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.668 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.668 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.668 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.668 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.669 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.669 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.669 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.669 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.669 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.669 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.669 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.670 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.670 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.670 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.670 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.670 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.670 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.670 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.671 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.671 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.671 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.671 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.671 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.671 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.671 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.672 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.672 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.672 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.672 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.672 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.672 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.672 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.673 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.673 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.673 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.673 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.673 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.673 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.673 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.674 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.674 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.674 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.674 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.674 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.674 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.674 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.675 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.675 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.675 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.675 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.675 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.675 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.675 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.676 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.676 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.676 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.676 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.676 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.676 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.676 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.676 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.677 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.677 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.677 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.677 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.677 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.677 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.677 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.678 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.678 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.678 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.678 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.678 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.678 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.678 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.679 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.679 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.679 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.679 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.679 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.679 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.679 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.680 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.680 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.680 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.680 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.680 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.680 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.680 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.680 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.681 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.681 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.681 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.681 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.681 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.681 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.682 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.682 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.682 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.682 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.682 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.682 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.683 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.683 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.683 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.683 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.683 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.683 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.683 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.684 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.684 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.684 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.684 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.684 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.684 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.685 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.685 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.685 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.685 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.685 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.685 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.685 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.686 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.686 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.686 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.686 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.686 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.687 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.687 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.687 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.687 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.687 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.687 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.688 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.688 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.688 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.688 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.688 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.688 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.688 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.689 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.689 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.689 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.689 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.689 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.689 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.689 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.690 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.690 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.690 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.690 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.690 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.690 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.691 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.691 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.691 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.691 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.692 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.692 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.692 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.692 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.692 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.693 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.693 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.693 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.693 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.693 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.693 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.693 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.694 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.694 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.694 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.694 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.694 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.694 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.695 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.695 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.695 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.695 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.695 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.695 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.695 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.696 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.696 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.696 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.696 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.696 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.696 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.697 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.697 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.697 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.697 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.697 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.697 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.698 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.698 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.698 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.698 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.698 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.698 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.698 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.698 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.699 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.699 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.699 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.699 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.699 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.699 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.700 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.700 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.700 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.700 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.700 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.700 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.700 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.701 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.701 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.701 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.701 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.701 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.702 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.702 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.702 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.702 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.702 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.702 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.703 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.703 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.703 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.703 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.704 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.704 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.704 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.704 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.704 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.704 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.705 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.705 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.705 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.705 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.705 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.705 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.706 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.706 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.706 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.706 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.706 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.707 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.707 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.707 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.707 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.707 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.707 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.708 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.708 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.708 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.708 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.708 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.708 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.709 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.709 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.709 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.709 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.709 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.709 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.710 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.710 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.710 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.710 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.710 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.710 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.711 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.711 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.711 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.711 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.711 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.711 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.711 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.712 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.712 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.712 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.712 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.712 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.712 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.712 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.713 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.713 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.713 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.713 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.713 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.713 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.714 188526 WARNING oslo_config.cfg [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 11 08:52:32 np0005555520 nova_compute[188522]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 11 08:52:32 np0005555520 nova_compute[188522]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 11 08:52:32 np0005555520 nova_compute[188522]: and ``live_migration_inbound_addr`` respectively.
Dec 11 08:52:32 np0005555520 nova_compute[188522]: ).  Its value may be silently ignored in the future.
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.714 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.714 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.714 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.714 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.715 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.715 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.715 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.715 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.715 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.716 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.716 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.716 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.716 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.716 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.716 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.717 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.717 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.717 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.717 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.717 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.717 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.718 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.718 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.718 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.718 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.718 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.718 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.718 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.719 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.719 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.719 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.719 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.719 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.719 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.720 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.720 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.720 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.720 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.720 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.720 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.720 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.721 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.721 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.721 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.721 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.721 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.721 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.721 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.722 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.722 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.722 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.722 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.722 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.723 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.723 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.723 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.723 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.723 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.724 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.724 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.724 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.724 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.724 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.724 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.725 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.725 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.725 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.725 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.725 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.726 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.726 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.726 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.726 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.726 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.726 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.727 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.727 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.727 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.727 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.727 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.728 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.728 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.728 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.728 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.729 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.729 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.729 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.729 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.729 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.729 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.729 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.730 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.730 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.730 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.730 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.730 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.730 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.730 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.731 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.731 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.731 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.731 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.731 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.731 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.731 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.732 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.732 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.732 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.732 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.732 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.732 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.732 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.732 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.733 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.733 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.733 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.733 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.733 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.733 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.733 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.734 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.734 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.734 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.734 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.734 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.734 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.734 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.735 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.735 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.735 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.735 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.735 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.735 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.735 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.736 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.736 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.736 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.736 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.736 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.736 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.737 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.737 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.737 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.737 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.737 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.737 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.737 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.738 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.738 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.738 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.738 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.738 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.738 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.738 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.738 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.739 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.739 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.739 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.739 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.739 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.739 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.739 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.740 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.740 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.740 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.740 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.740 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.740 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.740 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.741 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.741 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.741 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.741 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.741 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.741 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.741 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.742 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.742 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.742 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.742 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.742 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.742 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.742 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.743 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.743 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.743 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.743 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.743 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.743 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.743 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.744 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.744 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.744 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.744 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.744 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.744 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.744 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.745 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.745 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.745 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.745 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.745 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.746 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.746 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.746 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.746 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.746 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.746 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.746 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.747 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.747 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.747 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.747 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.747 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.747 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.747 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.748 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.748 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.748 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.748 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.748 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.748 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.748 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.749 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.749 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.749 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.749 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.749 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.749 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.749 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.749 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.750 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.750 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.750 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.750 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.750 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.750 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.750 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.751 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.751 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.751 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.751 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.751 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.751 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.751 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.752 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.752 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.752 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.752 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.752 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.752 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.753 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.753 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.753 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.753 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.753 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.753 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.754 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.754 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.754 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.754 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.754 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.754 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.754 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.754 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.755 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.755 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.755 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.755 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.755 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.755 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.756 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.756 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.756 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.756 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.756 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.756 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.756 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.757 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.757 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.757 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.757 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.757 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.757 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.757 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.758 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.758 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.758 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.758 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.758 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.758 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.758 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.759 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.759 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.759 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.759 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.759 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.759 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.759 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.760 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.760 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.760 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.760 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.760 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.760 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.761 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.761 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.761 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.761 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.761 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.761 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.761 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.762 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.762 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.762 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.762 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.762 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.762 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.762 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.763 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.763 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.763 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.763 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.763 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.763 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.763 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.763 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.764 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.764 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.764 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.764 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.764 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.764 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.765 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.765 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.765 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.765 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.765 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.765 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.765 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.766 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.766 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.766 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.766 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.766 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.766 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.766 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.767 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.767 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.767 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.767 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.767 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.767 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.767 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.768 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.768 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.768 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.768 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.768 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.768 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.768 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.769 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.769 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.769 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.769 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.769 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.769 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.769 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.770 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.770 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.770 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.770 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.770 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.770 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.770 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.770 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.771 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.771 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.771 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.771 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.771 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.771 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.771 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.772 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.772 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.772 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.772 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.772 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.772 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.772 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.772 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.773 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.773 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.773 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.773 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.773 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.773 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.773 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.774 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.774 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.774 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.774 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.774 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.774 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.774 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.774 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.775 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.775 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.775 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.775 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.775 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.775 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.775 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.776 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.776 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.776 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.776 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.776 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.776 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.776 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.776 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.777 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.777 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.777 188526 DEBUG oslo_service.service [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.778 188526 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.920 188526 DEBUG nova.virt.libvirt.host [None req-f1e484f8-8c13-4569-8e86-982ad305f912 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.921 188526 DEBUG nova.virt.libvirt.host [None req-f1e484f8-8c13-4569-8e86-982ad305f912 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.923 188526 DEBUG nova.virt.libvirt.host [None req-f1e484f8-8c13-4569-8e86-982ad305f912 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec 11 08:52:32 np0005555520 nova_compute[188522]: 2025-12-11 13:52:32.924 188526 DEBUG nova.virt.libvirt.host [None req-f1e484f8-8c13-4569-8e86-982ad305f912 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec 11 08:52:32 np0005555520 systemd[1]: Starting libvirt QEMU daemon...
Dec 11 08:52:32 np0005555520 systemd[1]: Started libvirt QEMU daemon.
Dec 11 08:52:33 np0005555520 nova_compute[188522]: 2025-12-11 13:52:33.018 188526 DEBUG nova.virt.libvirt.host [None req-f1e484f8-8c13-4569-8e86-982ad305f912 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5c1159c5b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec 11 08:52:33 np0005555520 nova_compute[188522]: 2025-12-11 13:52:33.024 188526 DEBUG nova.virt.libvirt.host [None req-f1e484f8-8c13-4569-8e86-982ad305f912 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5c1159c5b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec 11 08:52:33 np0005555520 nova_compute[188522]: 2025-12-11 13:52:33.025 188526 INFO nova.virt.libvirt.driver [None req-f1e484f8-8c13-4569-8e86-982ad305f912 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec 11 08:52:33 np0005555520 nova_compute[188522]: 2025-12-11 13:52:33.121 188526 WARNING nova.virt.libvirt.driver [None req-f1e484f8-8c13-4569-8e86-982ad305f912 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec 11 08:52:33 np0005555520 nova_compute[188522]: 2025-12-11 13:52:33.121 188526 DEBUG nova.virt.libvirt.volume.mount [None req-f1e484f8-8c13-4569-8e86-982ad305f912 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec 11 08:52:33 np0005555520 python3.9[189317]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:52:33 np0005555520 systemd[1]: Stopping nova_compute container...
Dec 11 08:52:33 np0005555520 nova_compute[188522]: 2025-12-11 13:52:33.369 188526 DEBUG oslo_concurrency.lockutils [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 08:52:33 np0005555520 nova_compute[188522]: 2025-12-11 13:52:33.369 188526 DEBUG oslo_concurrency.lockutils [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 08:52:33 np0005555520 nova_compute[188522]: 2025-12-11 13:52:33.369 188526 DEBUG oslo_concurrency.lockutils [None req-5f4348d9-007a-4388-bbac-713b493f38cf - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 08:52:34 np0005555520 virtqemud[189338]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 11 08:52:34 np0005555520 virtqemud[189338]: hostname: compute-0
Dec 11 08:52:34 np0005555520 virtqemud[189338]: End of file while reading data: Input/output error
Dec 11 08:52:34 np0005555520 systemd[1]: libpod-23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706.scope: Deactivated successfully.
Dec 11 08:52:34 np0005555520 systemd[1]: libpod-23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706.scope: Consumed 3.377s CPU time.
Dec 11 08:52:34 np0005555520 podman[189372]: 2025-12-11 13:52:34.510038445 +0000 UTC m=+1.186767113 container died 23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:52:34 np0005555520 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706-userdata-shm.mount: Deactivated successfully.
Dec 11 08:52:34 np0005555520 systemd[1]: var-lib-containers-storage-overlay-b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f-merged.mount: Deactivated successfully.
Dec 11 08:52:34 np0005555520 podman[189372]: 2025-12-11 13:52:34.575033398 +0000 UTC m=+1.251762066 container cleanup 23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:52:34 np0005555520 podman[189372]: nova_compute
Dec 11 08:52:34 np0005555520 podman[189412]: nova_compute
Dec 11 08:52:34 np0005555520 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 11 08:52:34 np0005555520 systemd[1]: Stopped nova_compute container.
Dec 11 08:52:34 np0005555520 systemd[1]: Starting nova_compute container...
Dec 11 08:52:34 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:52:34 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:34 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:34 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:34 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:34 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69dc363896aefbbbce876c2b4f31316f9ec44fa9c8a1c1af5942a8ea3686d1f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:34 np0005555520 podman[189425]: 2025-12-11 13:52:34.76363471 +0000 UTC m=+0.101214538 container init 23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Dec 11 08:52:34 np0005555520 podman[189425]: 2025-12-11 13:52:34.770365846 +0000 UTC m=+0.107945644 container start 23ef510a4d753dce9545abd61225e6cd784bd1ab6d9135deffcecaca7edf1706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 11 08:52:34 np0005555520 podman[189425]: nova_compute
Dec 11 08:52:34 np0005555520 nova_compute[189440]: + sudo -E kolla_set_configs
Dec 11 08:52:34 np0005555520 systemd[1]: Started nova_compute container.
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Validating config file
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Copying service configuration files
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Deleting /etc/ceph
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Creating directory /etc/ceph
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /etc/ceph
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Writing out command to execute
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 11 08:52:34 np0005555520 nova_compute[189440]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 11 08:52:34 np0005555520 nova_compute[189440]: ++ cat /run_command
Dec 11 08:52:34 np0005555520 nova_compute[189440]: + CMD=nova-compute
Dec 11 08:52:34 np0005555520 nova_compute[189440]: + ARGS=
Dec 11 08:52:34 np0005555520 nova_compute[189440]: + sudo kolla_copy_cacerts
Dec 11 08:52:34 np0005555520 nova_compute[189440]: + [[ ! -n '' ]]
Dec 11 08:52:34 np0005555520 nova_compute[189440]: + . kolla_extend_start
Dec 11 08:52:34 np0005555520 nova_compute[189440]: + echo 'Running command: '\''nova-compute'\'''
Dec 11 08:52:34 np0005555520 nova_compute[189440]: Running command: 'nova-compute'
Dec 11 08:52:34 np0005555520 nova_compute[189440]: + umask 0022
Dec 11 08:52:34 np0005555520 nova_compute[189440]: + exec nova-compute
Dec 11 08:52:35 np0005555520 podman[189575]: 2025-12-11 13:52:35.362283526 +0000 UTC m=+0.082569698 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 11 08:52:35 np0005555520 python3.9[189620]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 11 08:52:35 np0005555520 systemd[1]: Started libpod-conmon-5545c240e1df053844d735e7b5726039086457df4094b4e13dc54ff0f7b04cbd.scope.
Dec 11 08:52:35 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:52:35 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0468c486e2863e4baee0351279d1176af93678130e688640d283c5195ae494d/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:35 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0468c486e2863e4baee0351279d1176af93678130e688640d283c5195ae494d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:35 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0468c486e2863e4baee0351279d1176af93678130e688640d283c5195ae494d/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 11 08:52:35 np0005555520 podman[189646]: 2025-12-11 13:52:35.833731265 +0000 UTC m=+0.117026718 container init 5545c240e1df053844d735e7b5726039086457df4094b4e13dc54ff0f7b04cbd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 11 08:52:35 np0005555520 podman[189646]: 2025-12-11 13:52:35.849038052 +0000 UTC m=+0.132333485 container start 5545c240e1df053844d735e7b5726039086457df4094b4e13dc54ff0f7b04cbd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:52:35 np0005555520 python3.9[189620]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Applying nova statedir ownership
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 11 08:52:35 np0005555520 nova_compute_init[189668]: INFO:nova_statedir:Nova statedir ownership complete
Dec 11 08:52:35 np0005555520 systemd[1]: libpod-5545c240e1df053844d735e7b5726039086457df4094b4e13dc54ff0f7b04cbd.scope: Deactivated successfully.
Dec 11 08:52:35 np0005555520 podman[189683]: 2025-12-11 13:52:35.96529587 +0000 UTC m=+0.029557980 container died 5545c240e1df053844d735e7b5726039086457df4094b4e13dc54ff0f7b04cbd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible)
Dec 11 08:52:35 np0005555520 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5545c240e1df053844d735e7b5726039086457df4094b4e13dc54ff0f7b04cbd-userdata-shm.mount: Deactivated successfully.
Dec 11 08:52:36 np0005555520 systemd[1]: var-lib-containers-storage-overlay-a0468c486e2863e4baee0351279d1176af93678130e688640d283c5195ae494d-merged.mount: Deactivated successfully.
Dec 11 08:52:36 np0005555520 podman[189683]: 2025-12-11 13:52:36.005187383 +0000 UTC m=+0.069449473 container cleanup 5545c240e1df053844d735e7b5726039086457df4094b4e13dc54ff0f7b04cbd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 11 08:52:36 np0005555520 systemd[1]: libpod-conmon-5545c240e1df053844d735e7b5726039086457df4094b4e13dc54ff0f7b04cbd.scope: Deactivated successfully.
Dec 11 08:52:36 np0005555520 systemd[1]: session-23.scope: Deactivated successfully.
Dec 11 08:52:36 np0005555520 systemd[1]: session-23.scope: Consumed 2min 292ms CPU time.
Dec 11 08:52:36 np0005555520 systemd-logind[786]: Session 23 logged out. Waiting for processes to exit.
Dec 11 08:52:36 np0005555520 systemd-logind[786]: Removed session 23.
Dec 11 08:52:37 np0005555520 nova_compute[189440]: 2025-12-11 13:52:37.018 189444 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 11 08:52:37 np0005555520 nova_compute[189440]: 2025-12-11 13:52:37.019 189444 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 11 08:52:37 np0005555520 nova_compute[189440]: 2025-12-11 13:52:37.019 189444 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 11 08:52:37 np0005555520 nova_compute[189440]: 2025-12-11 13:52:37.019 189444 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec 11 08:52:37 np0005555520 nova_compute[189440]: 2025-12-11 13:52:37.201 189444 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 08:52:37 np0005555520 nova_compute[189440]: 2025-12-11 13:52:37.224 189444 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 08:52:37 np0005555520 nova_compute[189440]: 2025-12-11 13:52:37.224 189444 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.008 189444 INFO nova.virt.driver [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.144 189444 INFO nova.compute.provider_config [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.487 189444 DEBUG oslo_concurrency.lockutils [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.487 189444 DEBUG oslo_concurrency.lockutils [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.488 189444 DEBUG oslo_concurrency.lockutils [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.488 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.488 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.489 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.489 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.489 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.489 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.490 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.490 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.490 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.490 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.490 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.491 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.491 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.491 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.491 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.491 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.492 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.492 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.492 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.492 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.492 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.493 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.493 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.493 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.493 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.494 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.494 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.494 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.494 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.495 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.495 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.495 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.495 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.495 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.496 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.496 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.496 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.496 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.497 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.497 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.497 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.497 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.497 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.498 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.498 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.498 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.498 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.499 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.499 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.499 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.499 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.500 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.500 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.500 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.500 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.501 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.501 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.501 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.501 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.502 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.502 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.502 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.502 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.503 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.503 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.503 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.503 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.504 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.504 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.504 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.504 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.505 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.505 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.505 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.505 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.505 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.506 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.506 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.506 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.506 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.506 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.507 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.507 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.507 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.507 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.507 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.508 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.508 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.508 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.508 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.509 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.509 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.509 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.509 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.510 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.510 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.510 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.510 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.510 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.511 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.511 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.511 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.511 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.511 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.512 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.512 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.512 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.512 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.512 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.513 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.513 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.513 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.513 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.514 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.514 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.514 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.514 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.514 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.515 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.515 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.515 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.515 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.515 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.516 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.516 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.516 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.516 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.517 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.517 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.517 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.517 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.517 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.518 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.518 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.518 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.518 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.518 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.519 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.519 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.519 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.519 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.519 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.520 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.520 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.520 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.520 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.521 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.521 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.521 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.521 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.522 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.522 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.522 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.522 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.523 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.523 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.523 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.523 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.523 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.524 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.524 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.524 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.524 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.524 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.525 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.525 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.525 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.525 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.526 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.526 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.526 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.526 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.526 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.527 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.527 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.527 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.527 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.527 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.528 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.528 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.528 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.528 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.528 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.529 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.529 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.529 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.529 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.529 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.530 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.530 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.530 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.530 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.530 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.531 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.531 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.531 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.531 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.531 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.532 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.532 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.532 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.532 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.532 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.533 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.533 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.533 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.533 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.534 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.534 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.534 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.534 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.534 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.535 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.535 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.535 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.535 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.536 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.536 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.536 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.536 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.536 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.537 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.537 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.537 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.537 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.537 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.538 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.538 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.538 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.538 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.538 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.539 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.539 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.539 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.539 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.539 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.540 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.540 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.540 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.540 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.540 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.541 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.541 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.541 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.541 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.542 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.542 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.542 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.542 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.542 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.543 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.543 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.543 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.543 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.543 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.544 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.544 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.544 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.544 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.544 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.545 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.545 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.545 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.545 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.546 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.546 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.546 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.546 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.547 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.547 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.547 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.547 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.547 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.548 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.548 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.548 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.548 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.549 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.549 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.549 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.549 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.550 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.550 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.550 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.550 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.551 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.551 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.551 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.551 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.552 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.552 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.552 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.552 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.553 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.553 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.553 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.553 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.554 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.554 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.554 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.554 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.555 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.555 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.555 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.555 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.555 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.555 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.556 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.556 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.556 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.556 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.556 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.556 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.556 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.557 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.557 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.557 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.557 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.557 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.557 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.557 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.557 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.558 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.558 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.558 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.558 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.558 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.558 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.558 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.559 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.559 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.559 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.559 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.559 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.560 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.560 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.560 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.560 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.560 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.560 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.560 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.561 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.561 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.561 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.561 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.561 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.561 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.561 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.562 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.562 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.562 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.562 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.562 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.563 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.563 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.563 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.563 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.563 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.564 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.564 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.564 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.564 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.564 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.564 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.564 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.565 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.565 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.565 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.565 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.565 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.565 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.565 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.565 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.566 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.566 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.566 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.566 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.566 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.566 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.566 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.567 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.567 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.567 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.567 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.567 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.567 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.567 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.568 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.568 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.568 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.568 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.568 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.568 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.568 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.569 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.569 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.569 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.569 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.569 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.569 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.570 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.570 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.570 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.570 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.570 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.570 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.571 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.571 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.571 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.571 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.571 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.571 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.571 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.572 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.572 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.572 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.572 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.572 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.572 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.572 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.573 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.573 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.573 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.573 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.573 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.573 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.574 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.574 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.574 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.574 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.574 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.575 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.575 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.575 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.575 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.575 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.576 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.576 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.576 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.576 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.576 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.576 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.577 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.577 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.577 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.577 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.577 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.577 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.578 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.578 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.578 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.578 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.578 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.579 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.579 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.579 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.579 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.579 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.579 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.579 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.580 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.580 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.580 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.580 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.580 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.580 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.580 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.581 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.581 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.581 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.581 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.581 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.581 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.582 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.582 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.582 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.582 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.582 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.582 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.583 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.583 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.583 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.583 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.583 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.583 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.584 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.584 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.584 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.584 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.584 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.585 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.585 189444 WARNING oslo_config.cfg [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 11 08:52:38 np0005555520 nova_compute[189440]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 11 08:52:38 np0005555520 nova_compute[189440]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 11 08:52:38 np0005555520 nova_compute[189440]: and ``live_migration_inbound_addr`` respectively.
Dec 11 08:52:38 np0005555520 nova_compute[189440]: ).  Its value may be silently ignored in the future.#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.585 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.585 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.586 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.586 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.586 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.586 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.587 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.587 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.587 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.587 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.587 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.587 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.587 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.588 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.588 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.588 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.588 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.588 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.588 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.589 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.589 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.589 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.589 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.589 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.589 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.589 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.590 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.590 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.590 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.590 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.590 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.590 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.591 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.591 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.591 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.591 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.591 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.591 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.592 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.592 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.592 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.592 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.592 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.592 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.592 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.592 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.593 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.593 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.593 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.593 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.593 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.593 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.594 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.594 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.594 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.594 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.594 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.594 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.594 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.595 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.595 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.595 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.595 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.595 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.595 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.595 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.595 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.596 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.596 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.596 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.596 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.596 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.597 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.597 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.597 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.597 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.597 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.597 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.597 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.598 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.598 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.598 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.598 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.598 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.599 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.599 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.599 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.599 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.599 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.599 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.599 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.600 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.600 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.600 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.600 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.600 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.600 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.600 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.601 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.601 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.601 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.601 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.601 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.601 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.601 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.601 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.602 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.602 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.602 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.602 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.602 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.602 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.602 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.603 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.603 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.603 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.603 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.603 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.603 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.603 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.604 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.604 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.604 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.604 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.604 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.604 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.604 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.605 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.605 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.605 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.605 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.605 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.605 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.606 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.606 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.606 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.606 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.606 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.606 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.607 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.607 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.607 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.607 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.607 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.607 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.608 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.608 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.608 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.608 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.608 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.608 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.609 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.609 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.609 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.609 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.609 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.609 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.609 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.610 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.610 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.610 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.610 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.610 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.610 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.611 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.611 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.611 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.611 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.611 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.611 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.612 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.612 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.612 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.612 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.612 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.612 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.613 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.613 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.613 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.613 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.613 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.613 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.614 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.614 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.614 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.614 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.614 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.614 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.615 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.615 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.615 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.615 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.615 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.616 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.616 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.616 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.616 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.617 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.617 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.617 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.617 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.617 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.617 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.618 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.618 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.618 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.618 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.618 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.619 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.619 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.619 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.619 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.619 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.619 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.620 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.620 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.620 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.620 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.620 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.620 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.621 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.621 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.621 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.621 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.621 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.621 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.622 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.622 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.622 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.622 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.622 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.622 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.622 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.623 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.623 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.623 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.623 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.623 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.623 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.623 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.624 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.624 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.624 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.624 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.624 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.624 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.624 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.625 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.625 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.625 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.625 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.625 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.626 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.626 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.626 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.626 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.626 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.626 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.626 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.627 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.627 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.627 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.627 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.627 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.627 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.627 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.627 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.628 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.628 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.628 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.628 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.628 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.628 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.628 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.629 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.629 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.629 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.629 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.629 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.629 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.629 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.630 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.630 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.630 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.630 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.630 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.630 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.630 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.631 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.631 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.631 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.631 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.631 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.631 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.632 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.632 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.632 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.632 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.632 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.632 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.632 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.633 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.633 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.633 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.633 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.633 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.633 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.634 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.634 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.634 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.634 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.634 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.634 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.634 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.635 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.635 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.635 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.635 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.635 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.635 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.635 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.636 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.636 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.636 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.636 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.636 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.636 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.636 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.637 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.637 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.637 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.637 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.637 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.637 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.637 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.637 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.638 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.638 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.638 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.638 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.638 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.638 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.639 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.639 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.639 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.639 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.639 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.639 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.639 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.639 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.640 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.640 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.640 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.640 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.640 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.640 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.640 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.640 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.641 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.641 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.641 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.641 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.641 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.641 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.641 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.641 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.642 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.642 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.642 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.642 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.642 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.642 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.642 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.643 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.643 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.643 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.643 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.643 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.643 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.643 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.644 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.644 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.644 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.644 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.644 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.644 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.644 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.644 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.645 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.645 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.645 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.645 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.645 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.645 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.645 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.646 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.646 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.646 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.646 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.646 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.646 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.647 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.647 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.647 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.647 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.647 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.647 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.647 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.648 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.648 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.648 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.648 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.648 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.648 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.648 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.649 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.649 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.649 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.649 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.649 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.649 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.649 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.650 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.650 189444 DEBUG oslo_service.service [None req-df9ebdc1-b28b-49d8-a704-90c33afa4ede - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.651 189444 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.736 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.737 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.738 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.738 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.755 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7eff85f19b80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.758 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7eff85f19b80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.760 189444 INFO nova.virt.libvirt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Connection event '1' reason 'None'#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.771 189444 INFO nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Libvirt host capabilities <capabilities>
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <host>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <uuid>07853827-0613-4815-9650-41016faf3709</uuid>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <cpu>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <arch>x86_64</arch>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model>EPYC-Rome-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <vendor>AMD</vendor>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <microcode version='16777317'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <signature family='23' model='49' stepping='0'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <maxphysaddr mode='emulate' bits='40'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='x2apic'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='tsc-deadline'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='osxsave'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='hypervisor'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='tsc_adjust'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='spec-ctrl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='stibp'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='arch-capabilities'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='ssbd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='cmp_legacy'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='topoext'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='virt-ssbd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='lbrv'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='tsc-scale'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='vmcb-clean'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='pause-filter'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='pfthreshold'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='svme-addr-chk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='rdctl-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='skip-l1dfl-vmentry'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='mds-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature name='pschange-mc-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <pages unit='KiB' size='4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <pages unit='KiB' size='2048'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <pages unit='KiB' size='1048576'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </cpu>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <power_management>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <suspend_mem/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <suspend_disk/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <suspend_hybrid/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </power_management>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <iommu support='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <migration_features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <live/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <uri_transports>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <uri_transport>tcp</uri_transport>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <uri_transport>rdma</uri_transport>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </uri_transports>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </migration_features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <topology>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <cells num='1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <cell id='0'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:          <memory unit='KiB'>7864300</memory>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:          <pages unit='KiB' size='4'>1966075</pages>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:          <pages unit='KiB' size='2048'>0</pages>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:          <pages unit='KiB' size='1048576'>0</pages>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:          <distances>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:            <sibling id='0' value='10'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:          </distances>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:          <cpus num='8'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:          </cpus>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        </cell>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </cells>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </topology>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <cache>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </cache>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <secmodel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model>selinux</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <doi>0</doi>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </secmodel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <secmodel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model>dac</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <doi>0</doi>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </secmodel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </host>
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <guest>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <os_type>hvm</os_type>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <arch name='i686'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <wordsize>32</wordsize>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <domain type='qemu'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <domain type='kvm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </arch>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <pae/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <nonpae/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <acpi default='on' toggle='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <apic default='on' toggle='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <cpuselection/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <deviceboot/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <disksnapshot default='on' toggle='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <externalSnapshot/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </guest>
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <guest>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <os_type>hvm</os_type>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <arch name='x86_64'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <wordsize>64</wordsize>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <domain type='qemu'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <domain type='kvm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </arch>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <acpi default='on' toggle='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <apic default='on' toggle='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <cpuselection/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <deviceboot/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <disksnapshot default='on' toggle='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <externalSnapshot/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </guest>
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 
Dec 11 08:52:38 np0005555520 nova_compute[189440]: </capabilities>
Dec 11 08:52:38 np0005555520 nova_compute[189440]: #033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.786 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.814 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 11 08:52:38 np0005555520 nova_compute[189440]: <domainCapabilities>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <path>/usr/libexec/qemu-kvm</path>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <domain>kvm</domain>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <arch>i686</arch>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <vcpu max='240'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <iothreads supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <os supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <enum name='firmware'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <loader supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>rom</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pflash</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='readonly'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>yes</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>no</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='secure'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>no</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </loader>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </os>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <cpu>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <mode name='host-passthrough' supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='hostPassthroughMigratable'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>on</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>off</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <mode name='maximum' supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='maximumMigratable'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>on</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>off</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <mode name='host-model' supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model fallback='forbid'>EPYC-Rome</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <vendor>AMD</vendor>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='x2apic'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc-deadline'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='hypervisor'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc_adjust'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='spec-ctrl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='stibp'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='ssbd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='cmp_legacy'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='overflow-recov'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='succor'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='ibrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='amd-ssbd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='virt-ssbd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='lbrv'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc-scale'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='vmcb-clean'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='flushbyasid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='pause-filter'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='pfthreshold'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='svme-addr-chk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='disable' name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <mode name='custom' supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-noTSX'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v5'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Denverton'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Dhyana-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Genoa'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='auto-ibrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Genoa-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='auto-ibrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx10'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx10-128'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx10-256'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx10-512'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-noTSX'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-noTSX-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-noTSX'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v5'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v6'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v7'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='KnightsMill'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-4fmaps'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-4vnniw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512er'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512pf'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='KnightsMill-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-4fmaps'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-4vnniw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512er'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512pf'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G4-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G5'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tbm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G5-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tbm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SierraForest'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-ne-convert'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cmpccxadd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SierraForest-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-ne-convert'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cmpccxadd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v5'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='athlon'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='athlon-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='core2duo'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='core2duo-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='coreduo'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='coreduo-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='n270'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='n270-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='phenom'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='phenom-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </cpu>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <memoryBacking supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <enum name='sourceType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <value>file</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <value>anonymous</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <value>memfd</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </memoryBacking>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <devices>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <disk supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='diskDevice'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>disk</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>cdrom</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>floppy</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>lun</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='bus'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>ide</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>fdc</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>scsi</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>sata</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio-transitional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio-non-transitional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </disk>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <graphics supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vnc</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>egl-headless</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>dbus</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </graphics>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <video supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='modelType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vga</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>cirrus</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>none</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>bochs</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>ramfb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </video>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <hostdev supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='mode'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>subsystem</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='startupPolicy'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>default</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>mandatory</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>requisite</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>optional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='subsysType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pci</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>scsi</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='capsType'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='pciBackend'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </hostdev>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <rng supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio-transitional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio-non-transitional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>random</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>egd</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>builtin</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </rng>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <filesystem supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='driverType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>path</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>handle</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtiofs</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </filesystem>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <tpm supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tpm-tis</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tpm-crb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>emulator</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>external</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendVersion'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>2.0</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </tpm>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <redirdev supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='bus'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </redirdev>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <channel supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pty</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>unix</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </channel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <crypto supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>qemu</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>builtin</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </crypto>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <interface supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>default</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>passt</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </interface>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <panic supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>isa</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>hyperv</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </panic>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <console supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>null</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vc</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pty</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>dev</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>file</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pipe</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>stdio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>udp</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tcp</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>unix</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>qemu-vdagent</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>dbus</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </console>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </devices>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <gic supported='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <vmcoreinfo supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <genid supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <backingStoreInput supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <backup supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <async-teardown supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <ps2 supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <sev supported='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <sgx supported='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <hyperv supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='features'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>relaxed</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vapic</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>spinlocks</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vpindex</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>runtime</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>synic</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>stimer</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>reset</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vendor_id</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>frequencies</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>reenlightenment</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tlbflush</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>ipi</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>avic</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>emsr_bitmap</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>xmm_input</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <defaults>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <spinlocks>4095</spinlocks>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <stimer_direct>on</stimer_direct>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <tlbflush_direct>on</tlbflush_direct>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <tlbflush_extended>on</tlbflush_extended>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </defaults>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </hyperv>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <launchSecurity supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='sectype'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tdx</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </launchSecurity>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]: </domainCapabilities>
Dec 11 08:52:38 np0005555520 nova_compute[189440]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.826 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 11 08:52:38 np0005555520 nova_compute[189440]: <domainCapabilities>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <path>/usr/libexec/qemu-kvm</path>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <domain>kvm</domain>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <machine>pc-q35-rhel9.8.0</machine>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <arch>i686</arch>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <vcpu max='4096'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <iothreads supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <os supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <enum name='firmware'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <loader supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>rom</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pflash</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='readonly'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>yes</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>no</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='secure'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>no</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </loader>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </os>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <cpu>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <mode name='host-passthrough' supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='hostPassthroughMigratable'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>on</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>off</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <mode name='maximum' supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='maximumMigratable'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>on</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>off</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <mode name='host-model' supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model fallback='forbid'>EPYC-Rome</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <vendor>AMD</vendor>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='x2apic'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc-deadline'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='hypervisor'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc_adjust'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='spec-ctrl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='stibp'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='ssbd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='cmp_legacy'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='overflow-recov'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='succor'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='ibrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='amd-ssbd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='virt-ssbd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='lbrv'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc-scale'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='vmcb-clean'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='flushbyasid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='pause-filter'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='pfthreshold'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='svme-addr-chk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <feature policy='disable' name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <mode name='custom' supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-noTSX'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v5'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Denverton'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Dhyana-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Genoa'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='auto-ibrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Genoa-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='auto-ibrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='EPYC-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx10'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx10-128'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx10-256'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx10-512'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-noTSX'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-noTSX-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-noTSX'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v5'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v6'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v7'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='KnightsMill'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-4fmaps'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-4vnniw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512er'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512pf'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='KnightsMill-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-4fmaps'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-4vnniw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512er'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512pf'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G4-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G5'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tbm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G5-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tbm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SierraForest'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-ne-convert'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cmpccxadd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='SierraForest-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-ifma'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-ne-convert'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx-vnni-int8'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cmpccxadd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v5'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v2'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v3'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v4'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='athlon'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='athlon-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='core2duo'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='core2duo-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='coreduo'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='coreduo-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='n270'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='n270-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='phenom'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <blockers model='phenom-v1'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </cpu>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <memoryBacking supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <enum name='sourceType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <value>file</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <value>anonymous</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <value>memfd</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </memoryBacking>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <devices>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <disk supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='diskDevice'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>disk</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>cdrom</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>floppy</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>lun</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='bus'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>fdc</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>scsi</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>sata</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio-transitional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio-non-transitional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </disk>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <graphics supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vnc</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>egl-headless</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>dbus</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </graphics>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <video supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='modelType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vga</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>cirrus</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>none</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>bochs</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>ramfb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </video>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <hostdev supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='mode'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>subsystem</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='startupPolicy'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>default</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>mandatory</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>requisite</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>optional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='subsysType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pci</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>scsi</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='capsType'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='pciBackend'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </hostdev>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <rng supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio-transitional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtio-non-transitional</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>random</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>egd</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>builtin</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </rng>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <filesystem supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='driverType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>path</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>handle</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>virtiofs</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </filesystem>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <tpm supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tpm-tis</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tpm-crb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>emulator</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>external</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendVersion'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>2.0</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </tpm>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <redirdev supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='bus'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </redirdev>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <channel supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pty</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>unix</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </channel>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <crypto supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>qemu</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>builtin</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </crypto>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <interface supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='backendType'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>default</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>passt</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </interface>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <panic supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>isa</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>hyperv</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </panic>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <console supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>null</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vc</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pty</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>dev</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>file</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>pipe</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>stdio</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>udp</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tcp</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>unix</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>qemu-vdagent</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>dbus</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </console>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </devices>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <gic supported='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <vmcoreinfo supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <genid supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <backingStoreInput supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <backup supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <async-teardown supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <ps2 supported='yes'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <sev supported='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <sgx supported='no'/>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <hyperv supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='features'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>relaxed</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vapic</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>spinlocks</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vpindex</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>runtime</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>synic</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>stimer</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>reset</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>vendor_id</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>frequencies</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>reenlightenment</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tlbflush</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>ipi</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>avic</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>emsr_bitmap</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>xmm_input</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <defaults>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <spinlocks>4095</spinlocks>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <stimer_direct>on</stimer_direct>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <tlbflush_direct>on</tlbflush_direct>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <tlbflush_extended>on</tlbflush_extended>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </defaults>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </hyperv>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    <launchSecurity supported='yes'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      <enum name='sectype'>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:        <value>tdx</value>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:    </launchSecurity>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  </features>
Dec 11 08:52:38 np0005555520 nova_compute[189440]: </domainCapabilities>
Dec 11 08:52:38 np0005555520 nova_compute[189440]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.877 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.879 189444 WARNING nova.virt.libvirt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.880 189444 DEBUG nova.virt.libvirt.volume.mount [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 11 08:52:38 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.885 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 11 08:52:38 np0005555520 nova_compute[189440]: <domainCapabilities>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <path>/usr/libexec/qemu-kvm</path>
Dec 11 08:52:38 np0005555520 nova_compute[189440]:  <domain>kvm</domain>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <machine>pc-q35-rhel9.8.0</machine>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <arch>x86_64</arch>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <vcpu max='4096'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <iothreads supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <os supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <enum name='firmware'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>efi</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <loader supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>rom</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pflash</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='readonly'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>yes</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>no</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='secure'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>yes</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>no</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </loader>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </os>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <cpu>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <mode name='host-passthrough' supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='hostPassthroughMigratable'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>on</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>off</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <mode name='maximum' supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='maximumMigratable'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>on</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>off</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <mode name='host-model' supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model fallback='forbid'>EPYC-Rome</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <vendor>AMD</vendor>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='x2apic'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc-deadline'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='hypervisor'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc_adjust'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='spec-ctrl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='stibp'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='ssbd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='cmp_legacy'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='overflow-recov'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='succor'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='ibrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='amd-ssbd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='virt-ssbd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='lbrv'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc-scale'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='vmcb-clean'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='flushbyasid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='pause-filter'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='pfthreshold'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='svme-addr-chk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='disable' name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <mode name='custom' supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-noTSX'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v5'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Denverton'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Dhyana-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Genoa'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='auto-ibrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Genoa-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='auto-ibrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx10'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx10-128'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx10-256'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx10-512'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-noTSX'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-noTSX-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-noTSX'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v5'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v6'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v7'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='KnightsMill'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-4fmaps'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-4vnniw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512er'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512pf'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='KnightsMill-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-4fmaps'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-4vnniw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512er'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512pf'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G4-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G5'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tbm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G5-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tbm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SierraForest'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-ne-convert'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cmpccxadd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SierraForest-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-ne-convert'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cmpccxadd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v5'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='athlon'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='athlon-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='core2duo'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='core2duo-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='coreduo'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='coreduo-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='n270'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='n270-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='phenom'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='phenom-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </cpu>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <memoryBacking supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <enum name='sourceType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>file</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>anonymous</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>memfd</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </memoryBacking>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <devices>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <disk supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='diskDevice'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>disk</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>cdrom</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>floppy</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>lun</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='bus'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>fdc</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>scsi</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>sata</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio-transitional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio-non-transitional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </disk>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <graphics supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vnc</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>egl-headless</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>dbus</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </graphics>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <video supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='modelType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vga</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>cirrus</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>none</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>bochs</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>ramfb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </video>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <hostdev supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='mode'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>subsystem</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='startupPolicy'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>default</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>mandatory</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>requisite</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>optional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='subsysType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pci</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>scsi</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='capsType'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='pciBackend'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </hostdev>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <rng supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio-transitional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio-non-transitional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>random</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>egd</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>builtin</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </rng>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <filesystem supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='driverType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>path</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>handle</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtiofs</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </filesystem>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <tpm supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tpm-tis</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tpm-crb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>emulator</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>external</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendVersion'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>2.0</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </tpm>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <redirdev supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='bus'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </redirdev>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <channel supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pty</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>unix</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </channel>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <crypto supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>qemu</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>builtin</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </crypto>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <interface supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>default</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>passt</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </interface>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <panic supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>isa</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>hyperv</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </panic>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <console supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>null</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vc</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pty</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>dev</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>file</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pipe</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>stdio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>udp</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tcp</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>unix</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>qemu-vdagent</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>dbus</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </console>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </devices>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <features>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <gic supported='no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <vmcoreinfo supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <genid supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <backingStoreInput supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <backup supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <async-teardown supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <ps2 supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <sev supported='no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <sgx supported='no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <hyperv supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='features'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>relaxed</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vapic</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>spinlocks</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vpindex</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>runtime</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>synic</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>stimer</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>reset</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vendor_id</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>frequencies</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>reenlightenment</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tlbflush</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>ipi</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>avic</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>emsr_bitmap</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>xmm_input</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <defaults>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <spinlocks>4095</spinlocks>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <stimer_direct>on</stimer_direct>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <tlbflush_direct>on</tlbflush_direct>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <tlbflush_extended>on</tlbflush_extended>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </defaults>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </hyperv>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <launchSecurity supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='sectype'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tdx</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </launchSecurity>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </features>
Dec 11 08:52:39 np0005555520 nova_compute[189440]: </domainCapabilities>
Dec 11 08:52:39 np0005555520 nova_compute[189440]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 11 08:52:39 np0005555520 nova_compute[189440]: 2025-12-11 13:52:38.964 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 11 08:52:39 np0005555520 nova_compute[189440]: <domainCapabilities>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <path>/usr/libexec/qemu-kvm</path>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <domain>kvm</domain>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <arch>x86_64</arch>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <vcpu max='240'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <iothreads supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <os supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <enum name='firmware'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <loader supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>rom</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pflash</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='readonly'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>yes</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>no</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='secure'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>no</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </loader>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </os>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <cpu>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <mode name='host-passthrough' supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='hostPassthroughMigratable'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>on</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>off</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <mode name='maximum' supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='maximumMigratable'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>on</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>off</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <mode name='host-model' supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model fallback='forbid'>EPYC-Rome</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <vendor>AMD</vendor>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='x2apic'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc-deadline'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='hypervisor'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc_adjust'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='spec-ctrl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='stibp'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='ssbd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='cmp_legacy'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='overflow-recov'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='succor'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='ibrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='amd-ssbd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='virt-ssbd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='lbrv'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='tsc-scale'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='vmcb-clean'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='flushbyasid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='pause-filter'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='pfthreshold'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='svme-addr-chk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <feature policy='disable' name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <mode name='custom' supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-noTSX'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Broadwell-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cascadelake-Server-v5'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Cooperlake-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Denverton'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Denverton-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Dhyana-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Genoa'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='auto-ibrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Genoa-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='auto-ibrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Milan-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amd-psfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='no-nested-data-bp'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='null-sel-clr-base'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='stibp-always-on'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-Rome-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='EPYC-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='GraniteRapids-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx10'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx10-128'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx10-256'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx10-512'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='prefetchiti'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-noTSX'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-noTSX-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Haswell-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-noTSX'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v5'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v6'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Icelake-Server-v7'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='IvyBridge-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='KnightsMill'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-4fmaps'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-4vnniw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512er'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512pf'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='KnightsMill-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-4fmaps'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-4vnniw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512er'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512pf'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G4-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G5'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tbm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Opteron_G5-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fma4'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tbm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xop'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SapphireRapids-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='amx-tile'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-bf16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-fp16'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512-vpopcntdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bitalg'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vbmi2'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrc'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fzrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='la57'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='taa-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='tsx-ldtrk'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xfd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SierraForest'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-ne-convert'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cmpccxadd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='SierraForest-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-ifma'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-ne-convert'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx-vnni-int8'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='bus-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cmpccxadd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fbsdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='fsrs'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ibrs-all'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mcdt-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pbrsb-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='psdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='sbdr-ssdp-no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='serialize'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vaes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='vpclmulqdq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Client-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='hle'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='rtm'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Skylake-Server-v5'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512bw'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512cd'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512dq'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512f'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='avx512vl'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='invpcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pcid'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='pku'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='mpx'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v2'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v3'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='core-capability'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='split-lock-detect'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='Snowridge-v4'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='cldemote'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='erms'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='gfni'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdir64b'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='movdiri'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='xsaves'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='athlon'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='athlon-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='core2duo'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='core2duo-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='coreduo'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='coreduo-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='n270'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='n270-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='ss'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='phenom'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <blockers model='phenom-v1'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnow'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <feature name='3dnowext'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </blockers>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </mode>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </cpu>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <memoryBacking supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <enum name='sourceType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>file</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>anonymous</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <value>memfd</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </memoryBacking>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <devices>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <disk supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='diskDevice'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>disk</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>cdrom</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>floppy</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>lun</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='bus'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>ide</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>fdc</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>scsi</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>sata</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio-transitional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio-non-transitional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </disk>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <graphics supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vnc</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>egl-headless</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>dbus</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </graphics>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <video supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='modelType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vga</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>cirrus</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>none</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>bochs</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>ramfb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </video>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <hostdev supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='mode'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>subsystem</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='startupPolicy'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>default</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>mandatory</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>requisite</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>optional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='subsysType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pci</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>scsi</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='capsType'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='pciBackend'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </hostdev>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <rng supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio-transitional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtio-non-transitional</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>random</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>egd</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>builtin</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </rng>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <filesystem supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='driverType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>path</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>handle</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>virtiofs</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </filesystem>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <tpm supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tpm-tis</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tpm-crb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>emulator</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>external</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendVersion'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>2.0</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </tpm>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <redirdev supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='bus'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>usb</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </redirdev>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <channel supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pty</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>unix</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </channel>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <crypto supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>qemu</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendModel'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>builtin</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </crypto>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <interface supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='backendType'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>default</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>passt</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </interface>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <panic supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='model'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>isa</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>hyperv</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </panic>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <console supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='type'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>null</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vc</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pty</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>dev</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>file</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>pipe</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>stdio</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>udp</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tcp</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>unix</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>qemu-vdagent</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>dbus</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </console>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </devices>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  <features>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <gic supported='no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <vmcoreinfo supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <genid supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <backingStoreInput supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <backup supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <async-teardown supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <ps2 supported='yes'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <sev supported='no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <sgx supported='no'/>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <hyperv supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='features'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>relaxed</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vapic</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>spinlocks</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vpindex</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>runtime</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>synic</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>stimer</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>reset</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>vendor_id</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>frequencies</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>reenlightenment</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tlbflush</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>ipi</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>avic</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>emsr_bitmap</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>xmm_input</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <defaults>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <spinlocks>4095</spinlocks>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <stimer_direct>on</stimer_direct>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <tlbflush_direct>on</tlbflush_direct>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <tlbflush_extended>on</tlbflush_extended>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </defaults>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </hyperv>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    <launchSecurity supported='yes'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      <enum name='sectype'>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:        <value>tdx</value>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:      </enum>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:    </launchSecurity>
Dec 11 08:52:39 np0005555520 nova_compute[189440]:  </features>
Dec 11 08:52:39 np0005555520 nova_compute[189440]: </domainCapabilities>
Dec 11 08:52:39 np0005555520 nova_compute[189440]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec 11 08:52:39 np0005555520 nova_compute[189440]: 2025-12-11 13:52:39.052 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec 11 08:52:39 np0005555520 nova_compute[189440]: 2025-12-11 13:52:39.052 189444 INFO nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Secure Boot support detected#033[00m
Dec 11 08:52:39 np0005555520 nova_compute[189440]: 2025-12-11 13:52:39.056 189444 INFO nova.virt.libvirt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec 11 08:52:39 np0005555520 nova_compute[189440]: 2025-12-11 13:52:39.056 189444 INFO nova.virt.libvirt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec 11 08:52:39 np0005555520 nova_compute[189440]: 2025-12-11 13:52:39.065 189444 DEBUG nova.virt.libvirt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec 11 08:52:39 np0005555520 nova_compute[189440]: 2025-12-11 13:52:39.143 189444 INFO nova.virt.node [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Determined node identity 1bda6308-729f-4919-a8ba-89570b8721fc from /var/lib/nova/compute_id#033[00m
Dec 11 08:52:39 np0005555520 nova_compute[189440]: 2025-12-11 13:52:39.608 189444 WARNING nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Compute nodes ['1bda6308-729f-4919-a8ba-89570b8721fc'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec 11 08:52:39 np0005555520 nova_compute[189440]: 2025-12-11 13:52:39.828 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.030 189444 WARNING nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.031 189444 DEBUG oslo_concurrency.lockutils [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.031 189444 DEBUG oslo_concurrency.lockutils [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.031 189444 DEBUG oslo_concurrency.lockutils [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.032 189444 DEBUG nova.compute.resource_tracker [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 08:52:40 np0005555520 systemd[1]: Starting libvirt nodedev daemon...
Dec 11 08:52:40 np0005555520 systemd[1]: Started libvirt nodedev daemon.
Dec 11 08:52:40 np0005555520 podman[189763]: 2025-12-11 13:52:40.15573848 +0000 UTC m=+0.061393586 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.369 189444 WARNING nova.virt.libvirt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.370 189444 DEBUG nova.compute.resource_tracker [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5994MB free_disk=72.6003646850586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.371 189444 DEBUG oslo_concurrency.lockutils [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.371 189444 DEBUG oslo_concurrency.lockutils [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.492 189444 WARNING nova.compute.resource_tracker [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] No compute node record for compute-0.ctlplane.example.com:1bda6308-729f-4919-a8ba-89570b8721fc: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1bda6308-729f-4919-a8ba-89570b8721fc could not be found.#033[00m
Dec 11 08:52:40 np0005555520 nova_compute[189440]: 2025-12-11 13:52:40.797 189444 INFO nova.compute.resource_tracker [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1bda6308-729f-4919-a8ba-89570b8721fc#033[00m
Dec 11 08:52:41 np0005555520 nova_compute[189440]: 2025-12-11 13:52:41.037 189444 DEBUG nova.compute.resource_tracker [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 08:52:41 np0005555520 nova_compute[189440]: 2025-12-11 13:52:41.037 189444 DEBUG nova.compute.resource_tracker [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 08:52:42 np0005555520 systemd-logind[786]: New session 25 of user zuul.
Dec 11 08:52:42 np0005555520 systemd[1]: Started Session 25 of User zuul.
Dec 11 08:52:42 np0005555520 nova_compute[189440]: 2025-12-11 13:52:42.381 189444 INFO nova.scheduler.client.report [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [req-1a76643c-a60b-475d-b716-5e3fad969cf6] Created resource provider record via placement API for resource provider with UUID 1bda6308-729f-4919-a8ba-89570b8721fc and name compute-0.ctlplane.example.com.#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.116 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 11 08:52:43 np0005555520 nova_compute[189440]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.116 189444 INFO nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.117 189444 DEBUG nova.compute.provider_tree [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.117 189444 DEBUG nova.virt.libvirt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 08:52:43 np0005555520 python3.9[189959]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.372 189444 DEBUG nova.scheduler.client.report [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Updated inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.372 189444 DEBUG nova.compute.provider_tree [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Updating resource provider 1bda6308-729f-4919-a8ba-89570b8721fc generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.373 189444 DEBUG nova.compute.provider_tree [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.633 189444 DEBUG nova.compute.provider_tree [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Updating resource provider 1bda6308-729f-4919-a8ba-89570b8721fc generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.757 189444 DEBUG nova.compute.resource_tracker [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.758 189444 DEBUG oslo_concurrency.lockutils [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:52:43 np0005555520 nova_compute[189440]: 2025-12-11 13:52:43.758 189444 DEBUG nova.service [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec 11 08:52:44 np0005555520 nova_compute[189440]: 2025-12-11 13:52:44.375 189444 DEBUG nova.service [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec 11 08:52:44 np0005555520 nova_compute[189440]: 2025-12-11 13:52:44.376 189444 DEBUG nova.servicegroup.drivers.db [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec 11 08:52:44 np0005555520 podman[190087]: 2025-12-11 13:52:44.503887179 +0000 UTC m=+0.112822493 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 08:52:44 np0005555520 python3.9[190132]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:52:44 np0005555520 systemd[1]: Reloading.
Dec 11 08:52:44 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:52:44 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:52:45 np0005555520 python3.9[190326]: ansible-ansible.builtin.service_facts Invoked
Dec 11 08:52:45 np0005555520 network[190343]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 08:52:45 np0005555520 network[190344]: 'network-scripts' will be removed from distribution in near future.
Dec 11 08:52:45 np0005555520 network[190345]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 08:52:51 np0005555520 nova_compute[189440]: 2025-12-11 13:52:51.378 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:52:51 np0005555520 nova_compute[189440]: 2025-12-11 13:52:51.397 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:52:52 np0005555520 python3.9[190619]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:52:53 np0005555520 python3.9[190772]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:52:53 np0005555520 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:52:53 np0005555520 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:52:53 np0005555520 python3.9[190925]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:52:54 np0005555520 python3.9[191077]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:52:55 np0005555520 python3.9[191229]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 08:52:56 np0005555520 python3.9[191381]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:52:56 np0005555520 systemd[1]: Reloading.
Dec 11 08:52:56 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:52:56 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:52:57 np0005555520 python3.9[191568]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:52:58 np0005555520 python3.9[191721]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:52:58 np0005555520 python3.9[191871]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:52:59 np0005555520 python3.9[192023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:00 np0005555520 python3.9[192144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461179.080716-133-13617828139302/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:53:01 np0005555520 python3.9[192296]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 11 08:53:02 np0005555520 python3.9[192448]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 11 08:53:02 np0005555520 python3.9[192601]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 08:53:03 np0005555520 python3.9[192759]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 11 08:53:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:53:04.061 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:53:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:53:04.062 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:53:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:53:04.062 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:53:05 np0005555520 python3.9[192917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:05 np0005555520 podman[192988]: 2025-12-11 13:53:05.498166905 +0000 UTC m=+0.084762482 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 11 08:53:05 np0005555520 python3.9[193058]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765461184.6712244-201-240416597905192/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:06 np0005555520 python3.9[193208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:06 np0005555520 python3.9[193329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765461185.8340652-201-198461394817255/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:07 np0005555520 python3.9[193479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:08 np0005555520 python3.9[193600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765461187.0480099-201-68589480626555/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:08 np0005555520 python3.9[193750]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:53:09 np0005555520 python3.9[193902]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:53:10 np0005555520 python3.9[194054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:10 np0005555520 podman[194149]: 2025-12-11 13:53:10.451963254 +0000 UTC m=+0.060985026 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:53:10 np0005555520 python3.9[194187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461189.5772524-260-52133828048701/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:11 np0005555520 python3.9[194342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:11 np0005555520 python3.9[194418]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:12 np0005555520 python3.9[194568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:12 np0005555520 python3.9[194689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461191.919327-260-238065316320502/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:13 np0005555520 python3.9[194839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:14 np0005555520 python3.9[194960]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461193.125762-260-222004737412964/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:14 np0005555520 podman[195084]: 2025-12-11 13:53:14.683231911 +0000 UTC m=+0.103891175 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 11 08:53:14 np0005555520 python3.9[195125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:15 np0005555520 python3.9[195257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461194.2897768-260-251066039480767/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:16 np0005555520 python3.9[195407]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:16 np0005555520 python3.9[195530]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461195.5763676-260-10944024044552/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:17 np0005555520 python3.9[195680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:17 np0005555520 python3.9[195801]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461196.81526-260-139378521661679/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:18 np0005555520 python3.9[195951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:18 np0005555520 python3.9[196072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461198.004406-260-31149659657667/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:19 np0005555520 python3.9[196222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:20 np0005555520 python3.9[196343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461199.1414967-260-79136392296084/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:20 np0005555520 python3.9[196493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:21 np0005555520 python3.9[196614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461200.3151636-260-20231268418685/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:21 np0005555520 python3.9[196764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:22 np0005555520 python3.9[196885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461201.528371-260-127362515244564/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:23 np0005555520 python3.9[197035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:23 np0005555520 python3.9[197111]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:24 np0005555520 python3.9[197261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:24 np0005555520 python3.9[197337]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:25 np0005555520 python3.9[197487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:25 np0005555520 python3.9[197563]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:26 np0005555520 python3.9[197715]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:27 np0005555520 python3.9[197867]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:28 np0005555520 python3.9[198019]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:53:28 np0005555520 python3.9[198171]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:53:28 np0005555520 systemd[1]: Reloading.
Dec 11 08:53:29 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:53:29 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:53:29 np0005555520 systemd[1]: Listening on Podman API Socket.
Dec 11 08:53:30 np0005555520 python3.9[198362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:30 np0005555520 python3.9[198485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461209.5517402-482-56267773158930/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:53:31 np0005555520 python3.9[198561]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:31 np0005555520 python3.9[198684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461209.5517402-482-56267773158930/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:53:32 np0005555520 python3.9[198836]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 11 08:53:33 np0005555520 python3.9[198988]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:53:34 np0005555520 python3[199140]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:53:34 np0005555520 podman[199176]: 2025-12-11 13:53:34.814865598 +0000 UTC m=+0.054568609 container create ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251210, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 08:53:34 np0005555520 podman[199176]: 2025-12-11 13:53:34.782502369 +0000 UTC m=+0.022205410 image pull 80890c1805dd88d2c8dac263b5abd3451d9e16dafe570d08a1aea1bc4a84ee52 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 11 08:53:34 np0005555520 python3[199140]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec 11 08:53:35 np0005555520 python3.9[199366]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:53:36 np0005555520 podman[199492]: 2025-12-11 13:53:36.240950088 +0000 UTC m=+0.069026356 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 08:53:36 np0005555520 python3.9[199537]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.237 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.280 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.281 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.281 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.282 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.282 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.282 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.282 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.283 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.284 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:53:37 np0005555520 python3.9[199691]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765461216.5085852-546-211200552865844/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.311 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.311 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.311 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.312 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.485 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.486 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5992MB free_disk=72.59953689575195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.486 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.487 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.566 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.566 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.588 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.601 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.603 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 08:53:37 np0005555520 nova_compute[189440]: 2025-12-11 13:53:37.604 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:53:38 np0005555520 python3.9[199767]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:53:38 np0005555520 systemd[1]: Reloading.
Dec 11 08:53:38 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:53:38 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:53:39 np0005555520 python3.9[199879]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:53:39 np0005555520 systemd[1]: Reloading.
Dec 11 08:53:39 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:53:39 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:53:39 np0005555520 systemd[1]: Starting ceilometer_agent_compute container...
Dec 11 08:53:39 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:53:39 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f92611207073762f8cecaa5be162b0f8e05ee137feaf7ba4af337788fd9845/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:39 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f92611207073762f8cecaa5be162b0f8e05ee137feaf7ba4af337788fd9845/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:39 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f92611207073762f8cecaa5be162b0f8e05ee137feaf7ba4af337788fd9845/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:39 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f92611207073762f8cecaa5be162b0f8e05ee137feaf7ba4af337788fd9845/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:39 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.
Dec 11 08:53:39 np0005555520 podman[199919]: 2025-12-11 13:53:39.71009563 +0000 UTC m=+0.127626714 container init ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: + sudo -E kolla_set_configs
Dec 11 08:53:39 np0005555520 podman[199919]: 2025-12-11 13:53:39.7448916 +0000 UTC m=+0.162422614 container start ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true)
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: sudo: unable to send audit message: Operation not permitted
Dec 11 08:53:39 np0005555520 podman[199919]: ceilometer_agent_compute
Dec 11 08:53:39 np0005555520 systemd[1]: Started ceilometer_agent_compute container.
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Validating config file
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Copying service configuration files
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: INFO:__main__:Writing out command to execute
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: ++ cat /run_command
Dec 11 08:53:39 np0005555520 podman[199941]: 2025-12-11 13:53:39.826656329 +0000 UTC m=+0.067261062 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: + ARGS=
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: + sudo kolla_copy_cacerts
Dec 11 08:53:39 np0005555520 systemd[1]: ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3-27dbcc705a92abdb.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:53:39 np0005555520 systemd[1]: ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3-27dbcc705a92abdb.service: Failed with result 'exit-code'.
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: sudo: unable to send audit message: Operation not permitted
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: + [[ ! -n '' ]]
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: + . kolla_extend_start
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: + umask 0022
Dec 11 08:53:39 np0005555520 ceilometer_agent_compute[199934]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 11 08:53:40 np0005555520 python3.9[200117]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.738 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.738 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.738 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.738 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.738 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.739 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.740 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.740 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.740 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.740 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.740 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.740 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.740 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.740 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.740 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.741 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.742 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.743 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.744 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.745 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.746 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.747 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.748 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.749 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.750 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.751 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.772 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.773 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.773 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.773 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.774 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.774 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.774 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.774 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.774 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.775 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.776 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.777 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.778 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.778 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.778 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.778 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.778 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.778 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.778 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.778 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.779 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.780 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 systemd[1]: Stopping ceilometer_agent_compute container...
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.781 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.783 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.785 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.785 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.785 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.785 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.785 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.787 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.789 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.791 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.834 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 11 08:53:40 np0005555520 podman[200119]: 2025-12-11 13:53:40.852079102 +0000 UTC m=+0.089412660 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.935 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.935 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec 11 08:53:40 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.936 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:40.999 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.020 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.021 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.021 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.176 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.176 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.176 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.176 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.176 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.176 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.177 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.177 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.177 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.177 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.177 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.177 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.177 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.177 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.177 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.178 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.178 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.178 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.178 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.178 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.178 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.178 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.179 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.179 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.179 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.179 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.179 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.179 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.179 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.179 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.180 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.181 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.182 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.183 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.184 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.184 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.184 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.184 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.184 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.184 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.184 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.184 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.184 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.185 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.185 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.185 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.185 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.185 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.185 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.185 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.185 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.186 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.187 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.188 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.189 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.190 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.191 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.191 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.191 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.191 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.191 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.191 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.191 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.191 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.191 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.192 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[199934]: 2025-12-11 13:53:41.200 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec 11 08:53:41 np0005555520 virtqemud[189338]: End of file while reading data: Input/output error
Dec 11 08:53:41 np0005555520 systemd[1]: libpod-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope: Deactivated successfully.
Dec 11 08:53:41 np0005555520 systemd[1]: libpod-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope: Consumed 1.707s CPU time.
Dec 11 08:53:41 np0005555520 podman[200131]: 2025-12-11 13:53:41.441094632 +0000 UTC m=+0.645815825 container died ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 11 08:53:41 np0005555520 systemd[1]: ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3-27dbcc705a92abdb.timer: Deactivated successfully.
Dec 11 08:53:41 np0005555520 systemd[1]: Stopped /usr/bin/podman healthcheck run ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.
Dec 11 08:53:41 np0005555520 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3-userdata-shm.mount: Deactivated successfully.
Dec 11 08:53:41 np0005555520 systemd[1]: var-lib-containers-storage-overlay-f7f92611207073762f8cecaa5be162b0f8e05ee137feaf7ba4af337788fd9845-merged.mount: Deactivated successfully.
Dec 11 08:53:41 np0005555520 podman[200131]: 2025-12-11 13:53:41.504032097 +0000 UTC m=+0.708753300 container cleanup ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 11 08:53:41 np0005555520 podman[200131]: ceilometer_agent_compute
Dec 11 08:53:41 np0005555520 podman[200175]: ceilometer_agent_compute
Dec 11 08:53:41 np0005555520 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 11 08:53:41 np0005555520 systemd[1]: Stopped ceilometer_agent_compute container.
Dec 11 08:53:41 np0005555520 systemd[1]: Starting ceilometer_agent_compute container...
Dec 11 08:53:41 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:53:41 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f92611207073762f8cecaa5be162b0f8e05ee137feaf7ba4af337788fd9845/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:41 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f92611207073762f8cecaa5be162b0f8e05ee137feaf7ba4af337788fd9845/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:41 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f92611207073762f8cecaa5be162b0f8e05ee137feaf7ba4af337788fd9845/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:41 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f92611207073762f8cecaa5be162b0f8e05ee137feaf7ba4af337788fd9845/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:41 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.
Dec 11 08:53:41 np0005555520 podman[200187]: 2025-12-11 13:53:41.749340428 +0000 UTC m=+0.136974675 container init ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: + sudo -E kolla_set_configs
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: sudo: unable to send audit message: Operation not permitted
Dec 11 08:53:41 np0005555520 podman[200187]: 2025-12-11 13:53:41.789881649 +0000 UTC m=+0.177515906 container start ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 11 08:53:41 np0005555520 podman[200187]: ceilometer_agent_compute
Dec 11 08:53:41 np0005555520 systemd[1]: Started ceilometer_agent_compute container.
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Validating config file
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Copying service configuration files
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: INFO:__main__:Writing out command to execute
Dec 11 08:53:41 np0005555520 podman[200210]: 2025-12-11 13:53:41.859987461 +0000 UTC m=+0.057833360 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251210, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: ++ cat /run_command
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: + ARGS=
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: + sudo kolla_copy_cacerts
Dec 11 08:53:41 np0005555520 systemd[1]: ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3-172208df0d2ac6b2.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:53:41 np0005555520 systemd[1]: ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3-172208df0d2ac6b2.service: Failed with result 'exit-code'.
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: sudo: unable to send audit message: Operation not permitted
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: + [[ ! -n '' ]]
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: + . kolla_extend_start
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: + umask 0022
Dec 11 08:53:41 np0005555520 ceilometer_agent_compute[200203]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 11 08:53:42 np0005555520 python3.9[200386]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.743 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.743 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.743 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.743 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.743 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.744 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.745 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.745 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.745 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.745 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.745 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.745 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.745 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.745 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.745 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.746 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.747 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.748 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.749 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.749 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.749 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.749 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.749 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.749 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.749 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.749 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.749 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.750 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.750 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.750 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.750 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.750 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.750 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.750 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.750 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.751 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.751 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.751 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.751 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.751 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.751 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.751 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.752 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.752 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.752 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.752 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.752 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.752 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.752 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.752 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.753 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.754 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.754 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.754 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.754 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.754 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.754 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.754 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.754 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.754 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.755 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.755 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.755 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.755 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.755 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.755 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.755 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.755 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.755 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.756 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.756 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.756 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.756 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.756 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.756 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.756 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.756 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.756 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.757 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.758 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.758 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.758 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.758 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.758 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.758 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.758 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.759 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.759 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.759 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.759 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.759 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.759 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.760 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.760 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.760 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.783 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.784 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.784 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.784 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.784 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.784 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.784 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.785 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.785 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.785 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.785 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.785 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.785 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.785 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.786 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.786 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.786 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.786 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.786 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.786 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.786 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.786 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.787 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.787 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.787 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.787 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.787 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.787 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.787 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.787 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.787 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.788 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.789 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.790 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.791 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.792 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.793 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.794 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.795 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.796 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.797 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.798 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.799 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.802 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.804 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.805 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.813 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.823 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.824 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.824 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.947 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.948 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.949 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.950 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.951 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.952 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.953 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.954 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.955 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.956 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.957 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.958 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.959 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.960 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.962 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.975 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.976 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.977 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:53:42.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:53:43 np0005555520 python3.9[200517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461222.0162084-578-277164339597677/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:53:43 np0005555520 python3.9[200674]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 11 08:53:44 np0005555520 python3.9[200826]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:53:45 np0005555520 auditd[704]: Audit daemon rotating log files
Dec 11 08:53:45 np0005555520 podman[200950]: 2025-12-11 13:53:45.41000232 +0000 UTC m=+0.150068928 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 11 08:53:45 np0005555520 python3[200998]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:53:45 np0005555520 podman[201040]: 2025-12-11 13:53:45.865502852 +0000 UTC m=+0.067018905 container create 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 08:53:45 np0005555520 podman[201040]: 2025-12-11 13:53:45.831250347 +0000 UTC m=+0.032766400 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 11 08:53:45 np0005555520 python3[200998]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec 11 08:53:46 np0005555520 python3.9[201230]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:53:47 np0005555520 python3.9[201384]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:48 np0005555520 python3.9[201535]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765461227.5483809-631-135792848975133/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:53:48 np0005555520 python3.9[201611]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:53:48 np0005555520 systemd[1]: Reloading.
Dec 11 08:53:48 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:53:48 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:53:49 np0005555520 python3.9[201722]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:53:49 np0005555520 systemd[1]: Reloading.
Dec 11 08:53:49 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:53:49 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:53:50 np0005555520 systemd[1]: Starting node_exporter container...
Dec 11 08:53:50 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:53:50 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c161a3eeceb5eb663bf1034c73a8d3b378aa98c230ec2e7e7773994c8a83571e/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:50 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c161a3eeceb5eb663bf1034c73a8d3b378aa98c230ec2e7e7773994c8a83571e/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:50 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.
Dec 11 08:53:50 np0005555520 podman[201762]: 2025-12-11 13:53:50.351344391 +0000 UTC m=+0.148641653 container init 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.375Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.375Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.375Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.376Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.376Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.377Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.377Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=arp
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=bcache
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=bonding
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=cpu
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=edac
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=filefd
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.378Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=netclass
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=netdev
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=netstat
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=nfs
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=nvme
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=softnet
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=systemd
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=xfs
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.379Z caller=node_exporter.go:117 level=info collector=zfs
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.380Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 11 08:53:50 np0005555520 node_exporter[201777]: ts=2025-12-11T13:53:50.381Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 11 08:53:50 np0005555520 podman[201762]: 2025-12-11 13:53:50.385447534 +0000 UTC m=+0.182744826 container start 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:53:50 np0005555520 podman[201762]: node_exporter
Dec 11 08:53:50 np0005555520 systemd[1]: Started node_exporter container.
Dec 11 08:53:50 np0005555520 podman[201787]: 2025-12-11 13:53:50.471700264 +0000 UTC m=+0.070626795 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 08:53:51 np0005555520 python3.9[201962]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:53:51 np0005555520 systemd[1]: Stopping node_exporter container...
Dec 11 08:53:51 np0005555520 systemd[1]: libpod-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.scope: Deactivated successfully.
Dec 11 08:53:51 np0005555520 podman[201966]: 2025-12-11 13:53:51.537044382 +0000 UTC m=+0.060099165 container died 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 08:53:51 np0005555520 systemd[1]: 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be-2e4e3e18d7005679.timer: Deactivated successfully.
Dec 11 08:53:51 np0005555520 systemd[1]: Stopped /usr/bin/podman healthcheck run 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.
Dec 11 08:53:51 np0005555520 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be-userdata-shm.mount: Deactivated successfully.
Dec 11 08:53:51 np0005555520 systemd[1]: var-lib-containers-storage-overlay-c161a3eeceb5eb663bf1034c73a8d3b378aa98c230ec2e7e7773994c8a83571e-merged.mount: Deactivated successfully.
Dec 11 08:53:51 np0005555520 podman[201966]: 2025-12-11 13:53:51.577072062 +0000 UTC m=+0.100126835 container cleanup 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:53:51 np0005555520 podman[201966]: node_exporter
Dec 11 08:53:51 np0005555520 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 11 08:53:51 np0005555520 podman[201995]: node_exporter
Dec 11 08:53:51 np0005555520 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 11 08:53:51 np0005555520 systemd[1]: Stopped node_exporter container.
Dec 11 08:53:51 np0005555520 systemd[1]: Starting node_exporter container...
Dec 11 08:53:51 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:53:51 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c161a3eeceb5eb663bf1034c73a8d3b378aa98c230ec2e7e7773994c8a83571e/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:51 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c161a3eeceb5eb663bf1034c73a8d3b378aa98c230ec2e7e7773994c8a83571e/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:53:51 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.
Dec 11 08:53:51 np0005555520 podman[202008]: 2025-12-11 13:53:51.820002923 +0000 UTC m=+0.149499895 container init 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.834Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.834Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.834Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.835Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.835Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.835Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.835Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.835Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.835Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=arp
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=bcache
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=bonding
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=cpu
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=edac
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=filefd
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=netclass
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=netdev
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=netstat
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=nfs
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=nvme
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=softnet
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=systemd
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=xfs
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=node_exporter.go:117 level=info collector=zfs
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.836Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 11 08:53:51 np0005555520 node_exporter[202024]: ts=2025-12-11T13:53:51.837Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 11 08:53:51 np0005555520 podman[202008]: 2025-12-11 13:53:51.859909939 +0000 UTC m=+0.189406921 container start 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:53:51 np0005555520 podman[202008]: node_exporter
Dec 11 08:53:51 np0005555520 systemd[1]: Started node_exporter container.
Dec 11 08:53:51 np0005555520 podman[202033]: 2025-12-11 13:53:51.946321733 +0000 UTC m=+0.064167805 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 08:53:52 np0005555520 python3.9[202207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:53:53 np0005555520 python3.9[202330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461232.0873876-663-204343365792600/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:53:54 np0005555520 python3.9[202482]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 11 08:53:54 np0005555520 python3.9[202634]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:53:55 np0005555520 python3[202786]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:53:57 np0005555520 podman[202801]: 2025-12-11 13:53:57.853411521 +0000 UTC m=+2.104692134 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 11 08:53:57 np0005555520 podman[202897]: 2025-12-11 13:53:57.992716793 +0000 UTC m=+0.052607761 container create 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter)
Dec 11 08:53:57 np0005555520 podman[202897]: 2025-12-11 13:53:57.962899757 +0000 UTC m=+0.022790705 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 11 08:53:57 np0005555520 python3[202786]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec 11 08:53:59 np0005555520 python3.9[203087]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:53:59 np0005555520 python3.9[203241]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:00 np0005555520 python3.9[203392]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765461240.081911-716-146991743981268/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:01 np0005555520 python3.9[203468]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:54:01 np0005555520 systemd[1]: Reloading.
Dec 11 08:54:01 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:54:01 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:54:02 np0005555520 python3.9[203578]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:54:02 np0005555520 systemd[1]: Reloading.
Dec 11 08:54:02 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:54:02 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:54:02 np0005555520 systemd[1]: Starting podman_exporter container...
Dec 11 08:54:02 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:54:02 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40ff34ced5eb78626e03397cd0aa3c17582a60107bbebc649fc986164ee6e8f/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:02 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40ff34ced5eb78626e03397cd0aa3c17582a60107bbebc649fc986164ee6e8f/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:02 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.
Dec 11 08:54:02 np0005555520 podman[203617]: 2025-12-11 13:54:02.931580722 +0000 UTC m=+0.154548828 container init 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 08:54:02 np0005555520 podman_exporter[203633]: ts=2025-12-11T13:54:02.952Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 11 08:54:02 np0005555520 podman_exporter[203633]: ts=2025-12-11T13:54:02.952Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 11 08:54:02 np0005555520 podman_exporter[203633]: ts=2025-12-11T13:54:02.952Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 11 08:54:02 np0005555520 podman_exporter[203633]: ts=2025-12-11T13:54:02.952Z caller=handler.go:105 level=info collector=container
Dec 11 08:54:02 np0005555520 podman[203617]: 2025-12-11 13:54:02.966496625 +0000 UTC m=+0.189464691 container start 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 08:54:02 np0005555520 podman[203617]: podman_exporter
Dec 11 08:54:02 np0005555520 systemd[1]: Starting Podman API Service...
Dec 11 08:54:02 np0005555520 systemd[1]: Started podman_exporter container.
Dec 11 08:54:02 np0005555520 systemd[1]: Started Podman API Service.
Dec 11 08:54:03 np0005555520 podman[203650]: time="2025-12-11T13:54:03Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 11 08:54:03 np0005555520 podman[203650]: time="2025-12-11T13:54:03Z" level=info msg="Setting parallel job count to 25"
Dec 11 08:54:03 np0005555520 podman[203650]: time="2025-12-11T13:54:03Z" level=info msg="Using sqlite as database backend"
Dec 11 08:54:03 np0005555520 podman[203650]: time="2025-12-11T13:54:03Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 11 08:54:03 np0005555520 podman[203650]: time="2025-12-11T13:54:03Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 11 08:54:03 np0005555520 podman[203650]: time="2025-12-11T13:54:03Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec 11 08:54:03 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:54:03 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 11 08:54:03 np0005555520 podman[203650]: time="2025-12-11T13:54:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 08:54:03 np0005555520 podman[203642]: 2025-12-11 13:54:03.089435002 +0000 UTC m=+0.114826018 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 08:54:03 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:54:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19588 "" "Go-http-client/1.1"
Dec 11 08:54:03 np0005555520 podman_exporter[203633]: ts=2025-12-11T13:54:03.095Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 11 08:54:03 np0005555520 systemd[1]: 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f-804f7e887891b3c.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:54:03 np0005555520 systemd[1]: 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f-804f7e887891b3c.service: Failed with result 'exit-code'.
Dec 11 08:54:03 np0005555520 podman_exporter[203633]: ts=2025-12-11T13:54:03.096Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 11 08:54:03 np0005555520 podman_exporter[203633]: ts=2025-12-11T13:54:03.097Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 11 08:54:03 np0005555520 python3.9[203830]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:54:03 np0005555520 systemd[1]: Stopping podman_exporter container...
Dec 11 08:54:03 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:54:03 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec 11 08:54:03 np0005555520 systemd[1]: libpod-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.scope: Deactivated successfully.
Dec 11 08:54:03 np0005555520 podman[203834]: 2025-12-11 13:54:03.920439291 +0000 UTC m=+0.047883764 container died 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 08:54:03 np0005555520 systemd[1]: 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f-804f7e887891b3c.timer: Deactivated successfully.
Dec 11 08:54:03 np0005555520 systemd[1]: Stopped /usr/bin/podman healthcheck run 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.
Dec 11 08:54:03 np0005555520 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f-userdata-shm.mount: Deactivated successfully.
Dec 11 08:54:03 np0005555520 systemd[1]: var-lib-containers-storage-overlay-b40ff34ced5eb78626e03397cd0aa3c17582a60107bbebc649fc986164ee6e8f-merged.mount: Deactivated successfully.
Dec 11 08:54:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:54:04.063 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 08:54:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:54:04.064 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 08:54:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:54:04.064 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 08:54:04 np0005555520 podman[203834]: 2025-12-11 13:54:04.308754194 +0000 UTC m=+0.436198677 container cleanup 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:54:04 np0005555520 podman[203834]: podman_exporter
Dec 11 08:54:04 np0005555520 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 11 08:54:04 np0005555520 podman[203863]: podman_exporter
Dec 11 08:54:04 np0005555520 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 11 08:54:04 np0005555520 systemd[1]: Stopped podman_exporter container.
Dec 11 08:54:04 np0005555520 systemd[1]: Starting podman_exporter container...
Dec 11 08:54:04 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:54:04 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40ff34ced5eb78626e03397cd0aa3c17582a60107bbebc649fc986164ee6e8f/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:04 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40ff34ced5eb78626e03397cd0aa3c17582a60107bbebc649fc986164ee6e8f/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:04 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.
Dec 11 08:54:04 np0005555520 podman[203876]: 2025-12-11 13:54:04.546727913 +0000 UTC m=+0.130437043 container init 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 08:54:04 np0005555520 podman_exporter[203892]: ts=2025-12-11T13:54:04.567Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 11 08:54:04 np0005555520 podman_exporter[203892]: ts=2025-12-11T13:54:04.567Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 11 08:54:04 np0005555520 podman_exporter[203892]: ts=2025-12-11T13:54:04.567Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 11 08:54:04 np0005555520 podman_exporter[203892]: ts=2025-12-11T13:54:04.567Z caller=handler.go:105 level=info collector=container
Dec 11 08:54:04 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:54:04 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 11 08:54:04 np0005555520 podman[203650]: time="2025-12-11T13:54:04Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 08:54:04 np0005555520 podman[203876]: 2025-12-11 13:54:04.577870102 +0000 UTC m=+0.161579192 container start 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 08:54:04 np0005555520 podman[203876]: podman_exporter
Dec 11 08:54:04 np0005555520 systemd[1]: Started podman_exporter container.
Dec 11 08:54:04 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:54:04 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19590 "" "Go-http-client/1.1"
Dec 11 08:54:04 np0005555520 podman_exporter[203892]: ts=2025-12-11T13:54:04.595Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 11 08:54:04 np0005555520 podman_exporter[203892]: ts=2025-12-11T13:54:04.596Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 11 08:54:04 np0005555520 podman_exporter[203892]: ts=2025-12-11T13:54:04.597Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 11 08:54:04 np0005555520 podman[203902]: 2025-12-11 13:54:04.673516175 +0000 UTC m=+0.080000828 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 08:54:05 np0005555520 python3.9[204079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:54:05 np0005555520 python3.9[204202]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461244.8168926-748-68828981834209/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:54:06 np0005555520 podman[204302]: 2025-12-11 13:54:06.469929283 +0000 UTC m=+0.058393873 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 08:54:06 np0005555520 python3.9[204374]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 11 08:54:07 np0005555520 python3.9[204526]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:54:08 np0005555520 python3[204678]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:54:11 np0005555520 podman[204737]: 2025-12-11 13:54:11.653810996 +0000 UTC m=+0.237266923 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 11 08:54:11 np0005555520 podman[204693]: 2025-12-11 13:54:11.914094426 +0000 UTC m=+3.538915827 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 11 08:54:12 np0005555520 podman[204809]: 2025-12-11 13:54:12.051021128 +0000 UTC m=+0.049676808 container create 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Dec 11 08:54:12 np0005555520 podman[204809]: 2025-12-11 13:54:12.023510939 +0000 UTC m=+0.022166639 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 11 08:54:12 np0005555520 python3[204678]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 11 08:54:12 np0005555520 podman[204901]: 2025-12-11 13:54:12.450376934 +0000 UTC m=+0.048199562 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Dec 11 08:54:12 np0005555520 systemd[1]: ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3-172208df0d2ac6b2.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:54:12 np0005555520 systemd[1]: ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3-172208df0d2ac6b2.service: Failed with result 'exit-code'.
Dec 11 08:54:12 np0005555520 python3.9[205018]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:54:13 np0005555520 python3.9[205172]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:14 np0005555520 python3.9[205323]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765461253.7726061-801-123419173054061/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:15 np0005555520 python3.9[205399]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:54:15 np0005555520 systemd[1]: Reloading.
Dec 11 08:54:15 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:54:15 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:54:15 np0005555520 podman[205434]: 2025-12-11 13:54:15.784188183 +0000 UTC m=+0.093069881 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 08:54:16 np0005555520 python3.9[205536]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:54:16 np0005555520 systemd[1]: Reloading.
Dec 11 08:54:16 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:54:16 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:54:16 np0005555520 systemd[1]: Starting openstack_network_exporter container...
Dec 11 08:54:17 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:54:17 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68bdf4046bbd2fe00a71a2e450aefb0367268850b86997caba07be2901197867/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:17 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68bdf4046bbd2fe00a71a2e450aefb0367268850b86997caba07be2901197867/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:17 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68bdf4046bbd2fe00a71a2e450aefb0367268850b86997caba07be2901197867/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:17 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.
Dec 11 08:54:17 np0005555520 podman[205575]: 2025-12-11 13:54:17.041162154 +0000 UTC m=+0.208106491 container init 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *bridge.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *coverage.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *datapath.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *iface.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *memory.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *ovnnorthd.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *ovn.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *ovsdbserver.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *pmd_perf.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *pmd_rxq.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: INFO    13:54:17 main.go:48: registering *vswitch.Collector
Dec 11 08:54:17 np0005555520 openstack_network_exporter[205590]: NOTICE  13:54:17 main.go:76: listening on https://:9105/metrics
Dec 11 08:54:17 np0005555520 podman[205575]: 2025-12-11 13:54:17.066476721 +0000 UTC m=+0.233421048 container start 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-type=git, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 08:54:17 np0005555520 podman[205575]: openstack_network_exporter
Dec 11 08:54:17 np0005555520 systemd[1]: Started openstack_network_exporter container.
Dec 11 08:54:17 np0005555520 podman[205600]: 2025-12-11 13:54:17.153567152 +0000 UTC m=+0.074461890 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 08:54:17 np0005555520 python3.9[205774]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:54:17 np0005555520 systemd[1]: Stopping openstack_network_exporter container...
Dec 11 08:54:18 np0005555520 systemd[1]: libpod-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.scope: Deactivated successfully.
Dec 11 08:54:18 np0005555520 podman[205778]: 2025-12-11 13:54:18.229227935 +0000 UTC m=+0.263877400 container died 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 11 08:54:18 np0005555520 systemd[1]: 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73-e7d53f4290ffb57.timer: Deactivated successfully.
Dec 11 08:54:18 np0005555520 systemd[1]: Stopped /usr/bin/podman healthcheck run 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.
Dec 11 08:54:18 np0005555520 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73-userdata-shm.mount: Deactivated successfully.
Dec 11 08:54:18 np0005555520 systemd[1]: var-lib-containers-storage-overlay-68bdf4046bbd2fe00a71a2e450aefb0367268850b86997caba07be2901197867-merged.mount: Deactivated successfully.
Dec 11 08:54:19 np0005555520 podman[205778]: 2025-12-11 13:54:19.262186243 +0000 UTC m=+1.296835718 container cleanup 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 11 08:54:19 np0005555520 podman[205778]: openstack_network_exporter
Dec 11 08:54:19 np0005555520 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 11 08:54:19 np0005555520 podman[205805]: openstack_network_exporter
Dec 11 08:54:19 np0005555520 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 11 08:54:19 np0005555520 systemd[1]: Stopped openstack_network_exporter container.
Dec 11 08:54:19 np0005555520 systemd[1]: Starting openstack_network_exporter container...
Dec 11 08:54:19 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:54:19 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68bdf4046bbd2fe00a71a2e450aefb0367268850b86997caba07be2901197867/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:19 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68bdf4046bbd2fe00a71a2e450aefb0367268850b86997caba07be2901197867/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:19 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68bdf4046bbd2fe00a71a2e450aefb0367268850b86997caba07be2901197867/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:54:19 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.
Dec 11 08:54:19 np0005555520 podman[205818]: 2025-12-11 13:54:19.529242781 +0000 UTC m=+0.141344613 container init 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter)
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *bridge.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *coverage.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *datapath.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *iface.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *memory.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *ovnnorthd.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *ovn.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *ovsdbserver.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *pmd_perf.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *pmd_rxq.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: INFO    13:54:19 main.go:48: registering *vswitch.Collector
Dec 11 08:54:19 np0005555520 openstack_network_exporter[205834]: NOTICE  13:54:19 main.go:76: listening on https://:9105/metrics
Dec 11 08:54:19 np0005555520 podman[205818]: 2025-12-11 13:54:19.570229623 +0000 UTC m=+0.182331495 container start 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, 
managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 11 08:54:19 np0005555520 podman[205818]: openstack_network_exporter
Dec 11 08:54:19 np0005555520 systemd[1]: Started openstack_network_exporter container.
Dec 11 08:54:19 np0005555520 podman[205844]: 2025-12-11 13:54:19.658243438 +0000 UTC m=+0.076740007 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git)
Dec 11 08:54:20 np0005555520 python3.9[206016]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 08:54:21 np0005555520 python3.9[206168]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 11 08:54:22 np0005555520 podman[206305]: 2025-12-11 13:54:22.349709282 +0000 UTC m=+0.063337805 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:54:22 np0005555520 python3.9[206350]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:22 np0005555520 systemd[1]: Started libpod-conmon-8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e.scope.
Dec 11 08:54:22 np0005555520 podman[206358]: 2025-12-11 13:54:22.649937364 +0000 UTC m=+0.092809341 container exec 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:54:22 np0005555520 podman[206358]: 2025-12-11 13:54:22.686697979 +0000 UTC m=+0.129569986 container exec_died 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 08:54:22 np0005555520 systemd[1]: libpod-conmon-8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e.scope: Deactivated successfully.
Dec 11 08:54:23 np0005555520 python3.9[206542]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:23 np0005555520 systemd[1]: Started libpod-conmon-8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e.scope.
Dec 11 08:54:23 np0005555520 podman[206543]: 2025-12-11 13:54:23.468577369 +0000 UTC m=+0.071158815 container exec 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 11 08:54:23 np0005555520 podman[206543]: 2025-12-11 13:54:23.503223525 +0000 UTC m=+0.105804971 container exec_died 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:54:23 np0005555520 systemd[1]: libpod-conmon-8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e.scope: Deactivated successfully.
Dec 11 08:54:24 np0005555520 python3.9[206727]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:24 np0005555520 python3.9[206879]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 11 08:54:25 np0005555520 python3.9[207044]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:25 np0005555520 systemd[1]: Started libpod-conmon-11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca.scope.
Dec 11 08:54:25 np0005555520 podman[207045]: 2025-12-11 13:54:25.915029475 +0000 UTC m=+0.098668646 container exec 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 11 08:54:25 np0005555520 podman[207045]: 2025-12-11 13:54:25.924770748 +0000 UTC m=+0.108409939 container exec_died 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 11 08:54:25 np0005555520 systemd[1]: libpod-conmon-11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca.scope: Deactivated successfully.
Dec 11 08:54:26 np0005555520 python3.9[207229]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:26 np0005555520 systemd[1]: Started libpod-conmon-11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca.scope.
Dec 11 08:54:26 np0005555520 podman[207230]: 2025-12-11 13:54:26.824981406 +0000 UTC m=+0.068608996 container exec 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:54:26 np0005555520 podman[207230]: 2025-12-11 13:54:26.861188637 +0000 UTC m=+0.104816197 container exec_died 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:54:26 np0005555520 systemd[1]: libpod-conmon-11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca.scope: Deactivated successfully.
Dec 11 08:54:27 np0005555520 python3.9[207413]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:28 np0005555520 python3.9[207565]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 11 08:54:29 np0005555520 python3.9[207730]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:29 np0005555520 systemd[1]: Started libpod-conmon-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.scope.
Dec 11 08:54:29 np0005555520 podman[207731]: 2025-12-11 13:54:29.2670233 +0000 UTC m=+0.111133422 container exec 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 08:54:29 np0005555520 podman[207731]: 2025-12-11 13:54:29.301461941 +0000 UTC m=+0.145572073 container exec_died 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:54:29 np0005555520 systemd[1]: libpod-conmon-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.scope: Deactivated successfully.
Dec 11 08:54:29 np0005555520 python3.9[207914]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:30 np0005555520 systemd[1]: Started libpod-conmon-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.scope.
Dec 11 08:54:30 np0005555520 podman[207915]: 2025-12-11 13:54:30.079176626 +0000 UTC m=+0.073016418 container exec 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 11 08:54:30 np0005555520 podman[207915]: 2025-12-11 13:54:30.112288076 +0000 UTC m=+0.106127848 container exec_died 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 11 08:54:30 np0005555520 systemd[1]: libpod-conmon-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.scope: Deactivated successfully.
Dec 11 08:54:30 np0005555520 python3.9[208098]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:31 np0005555520 python3.9[208250]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 11 08:54:32 np0005555520 python3.9[208415]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:32 np0005555520 systemd[1]: Started libpod-conmon-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope.
Dec 11 08:54:32 np0005555520 podman[208416]: 2025-12-11 13:54:32.551935126 +0000 UTC m=+0.087106621 container exec ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:54:32 np0005555520 podman[208416]: 2025-12-11 13:54:32.58736987 +0000 UTC m=+0.122541365 container exec_died ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 11 08:54:32 np0005555520 systemd[1]: libpod-conmon-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope: Deactivated successfully.
Dec 11 08:54:33 np0005555520 python3.9[208597]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:33 np0005555520 systemd[1]: Started libpod-conmon-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope.
Dec 11 08:54:33 np0005555520 podman[208598]: 2025-12-11 13:54:33.415730727 +0000 UTC m=+0.066360675 container exec ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 08:54:33 np0005555520 podman[208598]: 2025-12-11 13:54:33.447190709 +0000 UTC m=+0.097820657 container exec_died ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 11 08:54:33 np0005555520 systemd[1]: libpod-conmon-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope: Deactivated successfully.
Dec 11 08:54:34 np0005555520 python3.9[208783]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:35 np0005555520 podman[208907]: 2025-12-11 13:54:35.041227455 +0000 UTC m=+0.078715838 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 08:54:35 np0005555520 python3.9[208952]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 11 08:54:36 np0005555520 python3.9[209124]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:36 np0005555520 systemd[1]: Started libpod-conmon-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.scope.
Dec 11 08:54:36 np0005555520 podman[209125]: 2025-12-11 13:54:36.137285428 +0000 UTC m=+0.101248505 container exec 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 08:54:36 np0005555520 podman[209125]: 2025-12-11 13:54:36.170169244 +0000 UTC m=+0.134132311 container exec_died 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 08:54:36 np0005555520 systemd[1]: libpod-conmon-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.scope: Deactivated successfully.
Dec 11 08:54:36 np0005555520 podman[209281]: 2025-12-11 13:54:36.774446046 +0000 UTC m=+0.091071012 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 11 08:54:36 np0005555520 python3.9[209328]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:37 np0005555520 systemd[1]: Started libpod-conmon-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.scope.
Dec 11 08:54:37 np0005555520 podman[209330]: 2025-12-11 13:54:37.075681332 +0000 UTC m=+0.094912129 container exec 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:54:37 np0005555520 podman[209330]: 2025-12-11 13:54:37.108120997 +0000 UTC m=+0.127351804 container exec_died 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 08:54:37 np0005555520 systemd[1]: libpod-conmon-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.scope: Deactivated successfully.
Dec 11 08:54:37 np0005555520 nova_compute[189440]: 2025-12-11 13:54:37.597 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:37 np0005555520 nova_compute[189440]: 2025-12-11 13:54:37.598 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:37 np0005555520 python3.9[209511]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:38 np0005555520 nova_compute[189440]: 2025-12-11 13:54:38.233 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:38 np0005555520 nova_compute[189440]: 2025-12-11 13:54:38.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 08:54:38 np0005555520 nova_compute[189440]: 2025-12-11 13:54:38.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 08:54:38 np0005555520 nova_compute[189440]: 2025-12-11 13:54:38.249 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 08:54:38 np0005555520 nova_compute[189440]: 2025-12-11 13:54:38.250 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:38 np0005555520 nova_compute[189440]: 2025-12-11 13:54:38.251 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:38 np0005555520 nova_compute[189440]: 2025-12-11 13:54:38.251 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:38 np0005555520 nova_compute[189440]: 2025-12-11 13:54:38.251 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:38 np0005555520 nova_compute[189440]: 2025-12-11 13:54:38.251 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 08:54:38 np0005555520 python3.9[209663]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.273 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.274 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.274 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.275 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.446 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.447 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5814MB free_disk=72.43044662475586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.447 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.448 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.501 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.502 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.529 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.542 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.544 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 08:54:39 np0005555520 nova_compute[189440]: 2025-12-11 13:54:39.544 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:54:39 np0005555520 python3.9[209828]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:39 np0005555520 systemd[1]: Started libpod-conmon-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.scope.
Dec 11 08:54:39 np0005555520 podman[209829]: 2025-12-11 13:54:39.684916155 +0000 UTC m=+0.106255470 container exec 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 08:54:39 np0005555520 podman[209829]: 2025-12-11 13:54:39.71605911 +0000 UTC m=+0.137398455 container exec_died 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 08:54:39 np0005555520 systemd[1]: libpod-conmon-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.scope: Deactivated successfully.
Dec 11 08:54:40 np0005555520 python3.9[210013]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:40 np0005555520 systemd[1]: Started libpod-conmon-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.scope.
Dec 11 08:54:40 np0005555520 podman[210014]: 2025-12-11 13:54:40.836642387 +0000 UTC m=+0.088046083 container exec 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 08:54:40 np0005555520 podman[210014]: 2025-12-11 13:54:40.870267579 +0000 UTC m=+0.121671205 container exec_died 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 08:54:40 np0005555520 systemd[1]: libpod-conmon-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.scope: Deactivated successfully.
Dec 11 08:54:41 np0005555520 python3.9[210197]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:42 np0005555520 podman[210321]: 2025-12-11 13:54:42.342836305 +0000 UTC m=+0.080447127 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:54:42 np0005555520 python3.9[210368]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 11 08:54:43 np0005555520 podman[210505]: 2025-12-11 13:54:43.267677168 +0000 UTC m=+0.095628176 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 11 08:54:43 np0005555520 python3.9[210551]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:43 np0005555520 systemd[1]: Started libpod-conmon-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.scope.
Dec 11 08:54:43 np0005555520 podman[210554]: 2025-12-11 13:54:43.558552487 +0000 UTC m=+0.086399935 container exec 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64)
Dec 11 08:54:43 np0005555520 podman[210554]: 2025-12-11 13:54:43.593208511 +0000 UTC m=+0.121055939 container exec_died 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, config_id=edpm, vcs-type=git, io.openshift.expose-services=)
Dec 11 08:54:43 np0005555520 systemd[1]: libpod-conmon-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.scope: Deactivated successfully.
Dec 11 08:54:44 np0005555520 python3.9[210738]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:54:44 np0005555520 systemd[1]: Started libpod-conmon-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.scope.
Dec 11 08:54:44 np0005555520 podman[210739]: 2025-12-11 13:54:44.508563707 +0000 UTC m=+0.098567294 container exec 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public)
Dec 11 08:54:44 np0005555520 podman[210739]: 2025-12-11 13:54:44.545349991 +0000 UTC m=+0.135353578 container exec_died 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Dec 11 08:54:44 np0005555520 systemd[1]: libpod-conmon-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.scope: Deactivated successfully.
Dec 11 08:54:45 np0005555520 python3.9[210920]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:46 np0005555520 podman[211044]: 2025-12-11 13:54:46.11163453 +0000 UTC m=+0.107026318 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 11 08:54:46 np0005555520 python3.9[211092]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:47 np0005555520 python3.9[211251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:54:47 np0005555520 python3.9[211374]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461286.4541104-1082-116676920240405/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:48 np0005555520 python3.9[211526]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:49 np0005555520 python3.9[211678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:54:49 np0005555520 python3.9[211756]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:50 np0005555520 podman[211882]: 2025-12-11 13:54:50.28270755 +0000 UTC m=+0.062127687 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, release=1755695350)
Dec 11 08:54:50 np0005555520 python3.9[211929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:54:50 np0005555520 python3.9[212007]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.i0qc985f recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:51 np0005555520 python3.9[212159]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:54:52 np0005555520 python3.9[212237]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:52 np0005555520 podman[212238]: 2025-12-11 13:54:52.52938701 +0000 UTC m=+0.076044098 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 08:54:53 np0005555520 python3.9[212414]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:54:54 np0005555520 python3[212567]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 11 08:54:54 np0005555520 python3.9[212719]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:54:55 np0005555520 python3.9[212797]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:56 np0005555520 python3.9[212949]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:54:56 np0005555520 python3.9[213027]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:57 np0005555520 python3.9[213179]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:54:57 np0005555520 python3.9[213257]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:58 np0005555520 python3.9[213409]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:54:58 np0005555520 python3.9[213487]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:54:59 np0005555520 python3.9[213639]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:00 np0005555520 python3.9[213764]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765461298.9721859-1207-47460795711104/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:00 np0005555520 python3.9[213916]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:01 np0005555520 python3.9[214068]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:55:02 np0005555520 python3.9[214223]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:03 np0005555520 python3.9[214375]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:55:03 np0005555520 python3.9[214528]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:55:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:55:04.064 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:55:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:55:04.065 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:55:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:55:04.065 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:55:04 np0005555520 python3.9[214682]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:55:05 np0005555520 podman[214838]: 2025-12-11 13:55:05.138701394 +0000 UTC m=+0.051981354 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 08:55:05 np0005555520 python3.9[214839]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:05 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:05 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 08:55:05 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:05 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:55:05 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:05 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:55:05 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:05 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 08:55:05 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:55:05 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:05 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 08:55:05 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:55:05 np0005555520 systemd[1]: session-25.scope: Deactivated successfully.
Dec 11 08:55:05 np0005555520 systemd[1]: session-25.scope: Consumed 1min 48.067s CPU time.
Dec 11 08:55:05 np0005555520 systemd-logind[786]: Session 25 logged out. Waiting for processes to exit.
Dec 11 08:55:05 np0005555520 systemd-logind[786]: Removed session 25.
Dec 11 08:55:06 np0005555520 podman[203650]: time="2025-12-11T13:55:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 08:55:06 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:55:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22543 "" "Go-http-client/1.1"
Dec 11 08:55:06 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:55:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3421 "" "Go-http-client/1.1"
Dec 11 08:55:07 np0005555520 podman[214897]: 2025-12-11 13:55:07.457425149 +0000 UTC m=+0.056958689 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 11 08:55:12 np0005555520 podman[214917]: 2025-12-11 13:55:12.462585736 +0000 UTC m=+0.067909390 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:55:13 np0005555520 systemd-logind[786]: New session 26 of user zuul.
Dec 11 08:55:13 np0005555520 systemd[1]: Started Session 26 of User zuul.
Dec 11 08:55:13 np0005555520 podman[214939]: 2025-12-11 13:55:13.382892745 +0000 UTC m=+0.090382626 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.build-date=20251210, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 11 08:55:14 np0005555520 python3.9[215111]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:55:14 np0005555520 systemd[1]: Reloading.
Dec 11 08:55:14 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:55:14 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:55:15 np0005555520 python3.9[215297]: ansible-ansible.builtin.service_facts Invoked
Dec 11 08:55:15 np0005555520 network[215314]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 08:55:15 np0005555520 network[215315]: 'network-scripts' will be removed from distribution in near future.
Dec 11 08:55:15 np0005555520 network[215316]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 08:55:16 np0005555520 podman[215322]: 2025-12-11 13:55:16.71920062 +0000 UTC m=+0.145165743 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 11 08:55:20 np0005555520 podman[215511]: 2025-12-11 13:55:20.457923385 +0000 UTC m=+0.059132100 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 11 08:55:21 np0005555520 python3.9[215636]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:55:22 np0005555520 python3.9[215789]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:22 np0005555520 podman[215941]: 2025-12-11 13:55:22.71314062 +0000 UTC m=+0.068093745 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 08:55:22 np0005555520 python3.9[215942]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:23 np0005555520 python3.9[216119]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:55:24 np0005555520 python3.9[216271]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 08:55:25 np0005555520 python3.9[216423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:55:25 np0005555520 systemd[1]: Reloading.
Dec 11 08:55:25 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:55:25 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:55:26 np0005555520 python3.9[216609]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:55:27 np0005555520 python3.9[216762]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:55:28 np0005555520 python3.9[216912]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:55:29 np0005555520 python3.9[217064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:29 np0005555520 podman[203650]: time="2025-12-11T13:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 08:55:29 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22543 "" "Go-http-client/1.1"
Dec 11 08:55:29 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3429 "" "Go-http-client/1.1"
Dec 11 08:55:29 np0005555520 python3.9[217186]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461328.7507417-125-243924207960625/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:55:30 np0005555520 python3.9[217338]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 11 08:55:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 08:55:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:55:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:55:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 08:55:31 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:55:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 08:55:31 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:55:32 np0005555520 python3.9[217489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:32 np0005555520 python3.9[217610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765461331.7142298-171-248883244307463/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:33 np0005555520 python3.9[217760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:34 np0005555520 python3.9[217881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765461332.9913058-171-212512270366758/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:34 np0005555520 python3.9[218031]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:35 np0005555520 python3.9[218152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765461334.3093886-171-139557308016892/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:35 np0005555520 podman[218153]: 2025-12-11 13:55:35.498908094 +0000 UTC m=+0.081935654 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 08:55:36 np0005555520 python3.9[218325]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:55:36 np0005555520 python3.9[218478]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:55:37 np0005555520 python3.9[218631]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:37 np0005555520 podman[218726]: 2025-12-11 13:55:37.995176611 +0000 UTC m=+0.084305802 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 11 08:55:38 np0005555520 python3.9[218764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461337.051568-230-36179180551137/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:38 np0005555520 nova_compute[189440]: 2025-12-11 13:55:38.540 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:55:38 np0005555520 nova_compute[189440]: 2025-12-11 13:55:38.540 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:55:38 np0005555520 nova_compute[189440]: 2025-12-11 13:55:38.540 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 08:55:38 np0005555520 nova_compute[189440]: 2025-12-11 13:55:38.541 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 08:55:38 np0005555520 nova_compute[189440]: 2025-12-11 13:55:38.562 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 08:55:38 np0005555520 nova_compute[189440]: 2025-12-11 13:55:38.563 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:55:38 np0005555520 nova_compute[189440]: 2025-12-11 13:55:38.564 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 08:55:38 np0005555520 python3.9[218922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:39 np0005555520 nova_compute[189440]: 2025-12-11 13:55:39.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:55:39 np0005555520 python3.9[218998]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:40 np0005555520 python3.9[219148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.233 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.266 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.266 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.266 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.266 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.429 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.430 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5791MB free_disk=72.42748260498047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.430 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.431 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.499 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.500 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.520 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.539 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.540 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 08:55:40 np0005555520 nova_compute[189440]: 2025-12-11 13:55:40.541 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:55:40 np0005555520 python3.9[219269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461339.6835728-230-162795020091368/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:41 np0005555520 python3.9[219419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:41 np0005555520 nova_compute[189440]: 2025-12-11 13:55:41.541 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:55:41 np0005555520 python3.9[219540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461340.9042478-230-47657500392656/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:42 np0005555520 python3.9[219690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.976 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.977 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9cb6dfa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:42 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:43 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:43 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:42.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:43 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:43.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:43 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:43.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:43 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:43.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:43 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:43.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:43 np0005555520 ceilometer_agent_compute[200203]: 2025-12-11 13:55:43.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 08:55:43 np0005555520 podman[219786]: 2025-12-11 13:55:43.114753235 +0000 UTC m=+0.099075825 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:55:43 np0005555520 python3.9[219822]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461342.1280284-230-92442875148427/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:43 np0005555520 podman[219955]: 2025-12-11 13:55:43.821791158 +0000 UTC m=+0.061613055 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 11 08:55:43 np0005555520 python3.9[219994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:44 np0005555520 python3.9[220122]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461343.4570255-230-197484672835882/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:45 np0005555520 python3.9[220272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:46 np0005555520 python3.9[220348]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:46 np0005555520 python3.9[220500]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:46 np0005555520 podman[220501]: 2025-12-11 13:55:46.97674421 +0000 UTC m=+0.084917648 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 11 08:55:47 np0005555520 python3.9[220676]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:48 np0005555520 python3.9[220828]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:55:48 np0005555520 python3.9[220980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:49 np0005555520 python3.9[221103]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461348.444345-349-235221513723062/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:55:49 np0005555520 python3.9[221179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:50 np0005555520 python3.9[221302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461348.444345-349-235221513723062/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:55:50 np0005555520 podman[221303]: 2025-12-11 13:55:50.688680296 +0000 UTC m=+0.085459030 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=)
Dec 11 08:55:51 np0005555520 python3.9[221476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:55:51 np0005555520 python3.9[221599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765461350.7895262-349-216833488001697/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 08:55:52 np0005555520 python3.9[221751]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec 11 08:55:53 np0005555520 podman[221855]: 2025-12-11 13:55:53.478941827 +0000 UTC m=+0.072005991 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 08:55:53 np0005555520 python3.9[221927]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:55:54 np0005555520 python3[222079]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:55:55 np0005555520 podman[222117]: 2025-12-11 13:55:55.085079662 +0000 UTC m=+0.054128372 container create a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Dec 11 08:55:55 np0005555520 podman[222117]: 2025-12-11 13:55:55.054369997 +0000 UTC m=+0.023418727 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 11 08:55:55 np0005555520 python3[222079]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec 11 08:55:56 np0005555520 python3.9[222307]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:55:57 np0005555520 python3.9[222461]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:57 np0005555520 python3.9[222612]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765461357.1908317-427-233692167456094/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:55:58 np0005555520 python3.9[222688]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:55:58 np0005555520 systemd[1]: Reloading.
Dec 11 08:55:59 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:55:59 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:55:59 np0005555520 podman[203650]: time="2025-12-11T13:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 08:55:59 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25323 "" "Go-http-client/1.1"
Dec 11 08:55:59 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3430 "" "Go-http-client/1.1"
Dec 11 08:55:59 np0005555520 python3.9[222800]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:56:00 np0005555520 systemd[1]: Reloading.
Dec 11 08:56:00 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:56:00 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:56:00 np0005555520 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 11 08:56:00 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:56:00 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e5928ed8d102d49c0e8ea864e9ac129906c4fac7e7771175173ce8da5056e/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:56:00 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e5928ed8d102d49c0e8ea864e9ac129906c4fac7e7771175173ce8da5056e/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:56:00 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e5928ed8d102d49c0e8ea864e9ac129906c4fac7e7771175173ce8da5056e/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 11 08:56:00 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e5928ed8d102d49c0e8ea864e9ac129906c4fac7e7771175173ce8da5056e/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 11 08:56:00 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc.
Dec 11 08:56:00 np0005555520 podman[222840]: 2025-12-11 13:56:00.650096471 +0000 UTC m=+0.305081817 container init a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: + sudo -E kolla_set_configs
Dec 11 08:56:00 np0005555520 podman[222840]: 2025-12-11 13:56:00.687792107 +0000 UTC m=+0.342777453 container start a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:56:00 np0005555520 podman[222840]: ceilometer_agent_ipmi
Dec 11 08:56:00 np0005555520 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Validating config file
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Copying service configuration files
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: INFO:__main__:Writing out command to execute
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: ++ cat /run_command
Dec 11 08:56:00 np0005555520 podman[222862]: 2025-12-11 13:56:00.757650233 +0000 UTC m=+0.059252966 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: + ARGS=
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: + sudo kolla_copy_cacerts
Dec 11 08:56:00 np0005555520 systemd[1]: a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc-26df6449b87e4d69.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:56:00 np0005555520 systemd[1]: a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc-26df6449b87e4d69.service: Failed with result 'exit-code'.
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: + [[ ! -n '' ]]
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: + . kolla_extend_start
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: + umask 0022
Dec 11 08:56:00 np0005555520 ceilometer_agent_ipmi[222855]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec 11 08:56:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 08:56:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:56:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:56:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 08:56:01 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:56:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 08:56:01 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:56:01 np0005555520 python3.9[223038]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.659 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.660 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.660 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.660 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.660 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.660 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.661 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.661 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.661 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.661 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.662 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.662 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.662 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.662 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.662 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.663 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.663 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.663 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.663 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.663 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.664 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.664 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.664 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.664 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.665 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.665 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.665 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.665 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.665 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.666 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.666 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.666 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.666 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.666 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.666 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.667 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.667 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.667 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.667 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.667 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.667 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.668 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.668 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.668 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.668 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.668 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.669 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.669 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.669 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.669 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.669 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.670 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.670 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.670 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.670 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.670 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.670 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.671 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.671 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.671 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.671 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.671 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.671 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.672 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.672 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.673 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.673 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.673 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.673 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.673 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.673 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.673 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.673 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.674 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.674 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.674 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.674 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.674 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.674 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.674 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.675 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.676 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.676 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.676 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.676 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.676 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.676 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.676 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.676 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.676 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.677 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.677 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.677 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.677 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.677 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.677 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.677 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.678 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.679 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.680 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.681 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.682 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.683 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.683 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.702 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.703 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.704 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 11 08:56:01 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:01.786 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpl9e7nrje/privsep.sock']
Dec 11 08:56:02 np0005555520 python3.9[223198]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.471 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.472 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpl9e7nrje/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.364 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.368 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.369 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.370 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.589 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.590 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.591 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.592 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.592 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.592 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.592 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.592 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.593 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.593 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.593 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.593 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.594 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.598 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.598 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.598 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.599 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.599 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.599 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.599 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.599 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.600 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.600 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.600 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.600 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.600 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.601 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.601 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.601 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.601 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.601 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.601 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.602 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.602 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.602 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.602 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.602 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.602 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.603 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.603 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.603 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.603 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.603 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.603 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.603 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.604 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.604 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.604 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.604 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.604 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.604 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.604 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.605 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.605 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.605 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.605 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.605 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.605 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.605 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.606 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.606 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.606 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.606 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.606 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.606 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.607 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.607 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.607 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.607 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.607 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.607 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.607 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.608 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.608 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.608 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.608 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.608 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.608 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.609 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.609 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.609 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.609 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.609 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.609 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.610 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.611 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.611 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.611 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.611 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.611 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.611 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.611 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.612 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.612 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.612 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.612 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.612 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.612 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.613 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.613 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.613 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.613 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.613 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.613 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.613 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.614 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.614 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.614 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.614 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.614 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.614 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.615 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.615 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.615 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.615 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.615 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.615 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.616 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.616 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.616 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.616 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.616 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.616 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.616 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.616 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.616 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.617 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.617 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.617 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.617 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.617 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.617 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.617 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.617 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.617 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.618 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.619 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.620 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.621 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.622 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.622 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.622 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.622 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.622 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.622 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.622 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.623 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.623 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.623 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.623 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.623 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.623 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.623 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.623 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.624 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.624 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.624 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.624 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.624 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.624 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.624 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.624 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.625 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.625 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.625 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.625 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.625 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.625 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.625 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.625 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.625 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.626 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.626 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 11 08:56:02 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:02.628 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 11 08:56:03 np0005555520 python3[223355]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec 11 08:56:03 np0005555520 podman[223392]: 2025-12-11 13:56:03.645950313 +0000 UTC m=+0.054543802 container create 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543)
Dec 11 08:56:03 np0005555520 podman[223392]: 2025-12-11 13:56:03.614629493 +0000 UTC m=+0.023223022 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 11 08:56:03 np0005555520 python3[223355]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec 11 08:56:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:56:04.065 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:56:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:56:04.067 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:56:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:56:04.068 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:56:04 np0005555520 python3.9[223582]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:56:05 np0005555520 python3.9[223736]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:05 np0005555520 podman[223859]: 2025-12-11 13:56:05.840157057 +0000 UTC m=+0.091705864 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 08:56:06 np0005555520 python3.9[223911]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765461365.3298173-489-120744258360093/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:06 np0005555520 python3.9[223988]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:56:06 np0005555520 systemd[1]: Reloading.
Dec 11 08:56:06 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:56:06 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:56:07 np0005555520 python3.9[224100]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 08:56:08 np0005555520 podman[224103]: 2025-12-11 13:56:08.533648039 +0000 UTC m=+0.092898973 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec 11 08:56:08 np0005555520 systemd[1]: Reloading.
Dec 11 08:56:08 np0005555520 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 08:56:08 np0005555520 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:56:08 np0005555520 systemd[1]: Starting kepler container...
Dec 11 08:56:09 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:56:09 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a.
Dec 11 08:56:09 np0005555520 podman[224161]: 2025-12-11 13:56:09.073968916 +0000 UTC m=+0.131877442 container init 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.buildah.version=1.29.0, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, io.openshift.tags=base rhel9)
Dec 11 08:56:09 np0005555520 kepler[224176]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 11 08:56:09 np0005555520 podman[224161]: 2025-12-11 13:56:09.095636988 +0000 UTC m=+0.153545544 container start 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-type=git, name=ubi9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 08:56:09 np0005555520 podman[224161]: kepler
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.101987       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.102159       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.102186       1 config.go:295] kernel version: 5.14
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.102879       1 power.go:78] Unable to obtain power, use estimate method
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.102901       1 redfish.go:169] failed to get redfish credential file path
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.103313       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.103326       1 power.go:79] using none to obtain power
Dec 11 08:56:09 np0005555520 kepler[224176]: E1211 13:56:09.103341       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 11 08:56:09 np0005555520 kepler[224176]: E1211 13:56:09.103369       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 11 08:56:09 np0005555520 kepler[224176]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.105214       1 exporter.go:84] Number of CPUs: 8
Dec 11 08:56:09 np0005555520 systemd[1]: Started kepler container.
Dec 11 08:56:09 np0005555520 podman[224186]: 2025-12-11 13:56:09.161949928 +0000 UTC m=+0.055081695 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, vcs-type=git)
Dec 11 08:56:09 np0005555520 systemd[1]: 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a-7a001450601d9557.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:56:09 np0005555520 systemd[1]: 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a-7a001450601d9557.service: Failed with result 'exit-code'.
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.671648       1 watcher.go:83] Using in cluster k8s config
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.671924       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 11 08:56:09 np0005555520 kepler[224176]: E1211 13:56:09.672122       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.677219       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.677387       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.682615       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.682957       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.694969       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.695147       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.695265       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.702605       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.702750       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.703019       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.703142       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.703263       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.703375       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.703554       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.703699       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.703889       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.704014       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.704209       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 11 08:56:09 np0005555520 kepler[224176]: I1211 13:56:09.704731       1 exporter.go:208] Started Kepler in 603.012856ms
Dec 11 08:56:09 np0005555520 python3.9[224365]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:56:10 np0005555520 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 11 08:56:10 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:10.106 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 11 08:56:10 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:10.208 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 11 08:56:10 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:10.209 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 11 08:56:10 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:10.210 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 11 08:56:10 np0005555520 ceilometer_agent_ipmi[222855]: 2025-12-11 13:56:10.220 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec 11 08:56:10 np0005555520 systemd[1]: libpod-a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc.scope: Deactivated successfully.
Dec 11 08:56:10 np0005555520 systemd[1]: libpod-a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc.scope: Consumed 2.326s CPU time.
Dec 11 08:56:10 np0005555520 podman[224375]: 2025-12-11 13:56:10.495568436 +0000 UTC m=+0.457444961 container died a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm)
Dec 11 08:56:10 np0005555520 systemd[1]: a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc-26df6449b87e4d69.timer: Deactivated successfully.
Dec 11 08:56:10 np0005555520 systemd[1]: Stopped /usr/bin/podman healthcheck run a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc.
Dec 11 08:56:10 np0005555520 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc-userdata-shm.mount: Deactivated successfully.
Dec 11 08:56:10 np0005555520 systemd[1]: var-lib-containers-storage-overlay-5e3e5928ed8d102d49c0e8ea864e9ac129906c4fac7e7771175173ce8da5056e-merged.mount: Deactivated successfully.
Dec 11 08:56:10 np0005555520 podman[224375]: 2025-12-11 13:56:10.576137906 +0000 UTC m=+0.538014411 container cleanup a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 11 08:56:10 np0005555520 podman[224375]: ceilometer_agent_ipmi
Dec 11 08:56:10 np0005555520 podman[224403]: ceilometer_agent_ipmi
Dec 11 08:56:10 np0005555520 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 11 08:56:10 np0005555520 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 11 08:56:10 np0005555520 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 11 08:56:10 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:56:10 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e5928ed8d102d49c0e8ea864e9ac129906c4fac7e7771175173ce8da5056e/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 11 08:56:10 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e5928ed8d102d49c0e8ea864e9ac129906c4fac7e7771175173ce8da5056e/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 11 08:56:10 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e5928ed8d102d49c0e8ea864e9ac129906c4fac7e7771175173ce8da5056e/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 11 08:56:10 np0005555520 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e5928ed8d102d49c0e8ea864e9ac129906c4fac7e7771175173ce8da5056e/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 11 08:56:10 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc.
Dec 11 08:56:10 np0005555520 podman[224416]: 2025-12-11 13:56:10.926998276 +0000 UTC m=+0.205831788 container init a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:56:10 np0005555520 ceilometer_agent_ipmi[224431]: + sudo -E kolla_set_configs
Dec 11 08:56:10 np0005555520 podman[224416]: 2025-12-11 13:56:10.969529222 +0000 UTC m=+0.248362744 container start a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 08:56:10 np0005555520 podman[224416]: ceilometer_agent_ipmi
Dec 11 08:56:10 np0005555520 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Validating config file
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Copying service configuration files
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: INFO:__main__:Writing out command to execute
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: ++ cat /run_command
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: + ARGS=
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: + sudo kolla_copy_cacerts
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: + [[ ! -n '' ]]
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: + . kolla_extend_start
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: + umask 0022
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec 11 08:56:11 np0005555520 podman[224438]: 2025-12-11 13:56:11.099815323 +0000 UTC m=+0.111451040 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=edpm)
Dec 11 08:56:11 np0005555520 systemd[1]: a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc-efe42ff01eeddf0.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:56:11 np0005555520 systemd[1]: a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc-efe42ff01eeddf0.service: Failed with result 'exit-code'.
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.936 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.937 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.937 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.937 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.937 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.937 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.937 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.937 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.937 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.937 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.938 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.938 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.938 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.938 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.938 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.938 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.938 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.938 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.938 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.939 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.939 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.939 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.939 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.939 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.939 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.939 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.939 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.939 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.940 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.940 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.940 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.940 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.940 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.940 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.940 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.940 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.940 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.941 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.941 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.941 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.941 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.941 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.941 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.941 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.942 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.943 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.944 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.945 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.945 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.945 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.945 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.945 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.945 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.945 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.945 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.945 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.946 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.947 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.947 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.947 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.947 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.948 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.949 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.949 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.949 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.949 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.949 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.949 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.949 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.949 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.950 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.950 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.950 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.950 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.950 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.950 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.950 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.951 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.951 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.951 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.951 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.951 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.951 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.951 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.951 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.952 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.953 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.954 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.979 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.981 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 11 08:56:11 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:11.983 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.010 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpuzyecqsp/privsep.sock']
Dec 11 08:56:12 np0005555520 python3.9[224614]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:56:12 np0005555520 systemd[1]: Stopping kepler container...
Dec 11 08:56:12 np0005555520 kepler[224176]: I1211 13:56:12.273972       1 exporter.go:218] Received shutdown signal
Dec 11 08:56:12 np0005555520 kepler[224176]: I1211 13:56:12.274454       1 exporter.go:226] Exiting...
Dec 11 08:56:12 np0005555520 systemd[1]: libpod-72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a.scope: Deactivated successfully.
Dec 11 08:56:12 np0005555520 podman[224625]: 2025-12-11 13:56:12.497487865 +0000 UTC m=+0.269023901 container died 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., version=9.4, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, name=ubi9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 11 08:56:12 np0005555520 systemd[1]: 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a-7a001450601d9557.timer: Deactivated successfully.
Dec 11 08:56:12 np0005555520 systemd[1]: Stopped /usr/bin/podman healthcheck run 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a.
Dec 11 08:56:12 np0005555520 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a-userdata-shm.mount: Deactivated successfully.
Dec 11 08:56:12 np0005555520 systemd[1]: var-lib-containers-storage-overlay-23e1932a2df20cf5a91dd7fd844afae0fc761f1e91286e972dc12981a85b5658-merged.mount: Deactivated successfully.
Dec 11 08:56:12 np0005555520 podman[224625]: 2025-12-11 13:56:12.550754794 +0000 UTC m=+0.322290820 container cleanup 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, architecture=x86_64, config_id=edpm)
Dec 11 08:56:12 np0005555520 podman[224625]: kepler
Dec 11 08:56:12 np0005555520 podman[224652]: kepler
Dec 11 08:56:12 np0005555520 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec 11 08:56:12 np0005555520 systemd[1]: Stopped kepler container.
Dec 11 08:56:12 np0005555520 systemd[1]: Starting kepler container...
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.702 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.703 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpuzyecqsp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.569 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.574 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.577 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.577 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec 11 08:56:12 np0005555520 systemd[1]: Started libcrun container.
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.797 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.798 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.800 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.800 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.800 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.801 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.802 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.802 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.802 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.803 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.803 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.803 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.804 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.807 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.807 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.808 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.808 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.808 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.808 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.809 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.809 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.809 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.809 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.810 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 systemd[1]: Started /usr/bin/podman healthcheck run 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a.
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.810 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.810 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.811 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.811 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.811 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.811 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.812 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.812 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.812 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.812 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.813 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.813 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.813 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.813 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.813 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.814 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.814 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.814 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.814 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.814 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.815 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.815 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.815 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.815 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.815 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.816 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.816 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.816 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.816 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.816 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.817 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.817 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.817 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.817 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.817 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.818 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.818 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.818 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.818 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.819 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.819 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.819 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.819 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.819 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.820 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.820 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.820 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.820 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.821 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.821 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.821 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.821 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.821 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.822 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.822 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.822 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.822 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.823 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.823 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.823 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.823 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.823 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.824 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.824 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.824 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.824 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.824 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.825 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.825 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.825 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.825 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.825 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.826 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.826 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.826 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.826 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.827 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.827 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.827 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.827 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.827 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.828 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.828 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.828 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.829 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.829 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.829 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.829 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.830 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.830 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.830 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.830 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.830 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.831 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.831 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.831 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.831 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.832 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.832 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.832 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.832 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.832 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.833 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.833 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.833 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.833 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.833 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.834 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.834 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.834 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.835 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.835 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.835 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.835 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.835 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.836 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.836 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.836 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.836 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.837 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.837 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.837 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.837 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.837 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.838 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.838 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.838 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.838 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.838 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.839 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.839 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.839 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.839 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.840 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.840 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.840 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.840 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.840 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.841 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.841 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.841 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.841 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.841 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.842 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.842 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.842 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.842 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.842 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.843 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.843 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.843 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.843 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.843 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.844 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.844 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.844 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.844 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.844 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.845 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.845 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.845 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.845 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.845 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.846 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.846 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.846 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.846 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.847 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.847 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.847 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.847 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.847 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.848 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.848 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.848 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.848 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.848 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.849 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.849 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.849 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.849 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.849 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.850 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.850 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.850 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.850 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.850 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.850 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.851 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.851 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.851 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.851 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 11 08:56:12 np0005555520 ceilometer_agent_ipmi[224431]: 2025-12-11 13:56:12.854 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 11 08:56:12 np0005555520 podman[224665]: 2025-12-11 13:56:12.911124939 +0000 UTC m=+0.253504390 container init 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, name=ubi9, config_id=edpm, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., 
com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, architecture=x86_64, release=1214.1726694543)
Dec 11 08:56:12 np0005555520 kepler[224681]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 11 08:56:12 np0005555520 kepler[224681]: I1211 13:56:12.947880       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 11 08:56:12 np0005555520 kepler[224681]: I1211 13:56:12.948127       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 11 08:56:12 np0005555520 kepler[224681]: I1211 13:56:12.948151       1 config.go:295] kernel version: 5.14
Dec 11 08:56:12 np0005555520 kepler[224681]: I1211 13:56:12.949156       1 power.go:78] Unable to obtain power, use estimate method
Dec 11 08:56:12 np0005555520 kepler[224681]: I1211 13:56:12.949195       1 redfish.go:169] failed to get redfish credential file path
Dec 11 08:56:12 np0005555520 kepler[224681]: I1211 13:56:12.949637       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 11 08:56:12 np0005555520 kepler[224681]: I1211 13:56:12.949657       1 power.go:79] using none to obtain power
Dec 11 08:56:12 np0005555520 kepler[224681]: E1211 13:56:12.949675       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 11 08:56:12 np0005555520 kepler[224681]: E1211 13:56:12.949704       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 11 08:56:12 np0005555520 kepler[224681]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 11 08:56:12 np0005555520 kepler[224681]: I1211 13:56:12.951691       1 exporter.go:84] Number of CPUs: 8
Dec 11 08:56:12 np0005555520 podman[224665]: 2025-12-11 13:56:12.953740376 +0000 UTC m=+0.296119797 container start 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, architecture=x86_64, 
release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 08:56:13 np0005555520 podman[224665]: kepler
Dec 11 08:56:13 np0005555520 systemd[1]: Started kepler container.
Dec 11 08:56:13 np0005555520 podman[224694]: 2025-12-11 13:56:13.123655921 +0000 UTC m=+0.152655162 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, release=1214.1726694543, version=9.4, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': 
'/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9)
Dec 11 08:56:13 np0005555520 systemd[1]: 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a-1bd5f11655e7f39e.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:56:13 np0005555520 systemd[1]: 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a-1bd5f11655e7f39e.service: Failed with result 'exit-code'.
Dec 11 08:56:13 np0005555520 podman[224736]: 2025-12-11 13:56:13.235769696 +0000 UTC m=+0.081115135 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.539066       1 watcher.go:83] Using in cluster k8s config
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.539101       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 11 08:56:13 np0005555520 kepler[224681]: E1211 13:56:13.539148       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.544729       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.544763       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.548536       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.548559       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.559221       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.559256       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.559271       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.567320       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.567353       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.567358       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.567363       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.567369       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.567379       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.567453       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.567478       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.567495       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.568366       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.568448       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 11 08:56:13 np0005555520 kepler[224681]: I1211 13:56:13.568913       1 exporter.go:208] Started Kepler in 621.372578ms
Dec 11 08:56:13 np0005555520 python3.9[224898]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 08:56:14 np0005555520 podman[224966]: 2025-12-11 13:56:14.532747375 +0000 UTC m=+0.119729714 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.build-date=20251210, tcib_managed=true, io.buildah.version=1.41.4)
Dec 11 08:56:15 np0005555520 python3.9[225070]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 11 08:56:16 np0005555520 python3.9[225234]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:16 np0005555520 systemd[1]: Started libpod-conmon-8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e.scope.
Dec 11 08:56:16 np0005555520 podman[225235]: 2025-12-11 13:56:16.592082444 +0000 UTC m=+0.128878517 container exec 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 08:56:16 np0005555520 podman[225235]: 2025-12-11 13:56:16.626167803 +0000 UTC m=+0.162963846 container exec_died 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 11 08:56:16 np0005555520 systemd[1]: libpod-conmon-8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e.scope: Deactivated successfully.
Dec 11 08:56:17 np0005555520 podman[225389]: 2025-12-11 13:56:17.450079287 +0000 UTC m=+0.171684569 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 08:56:17 np0005555520 python3.9[225440]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:17 np0005555520 systemd[1]: Started libpod-conmon-8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e.scope.
Dec 11 08:56:17 np0005555520 podman[225444]: 2025-12-11 13:56:17.738509633 +0000 UTC m=+0.119462945 container exec 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 11 08:56:17 np0005555520 podman[225444]: 2025-12-11 13:56:17.774085568 +0000 UTC m=+0.155038880 container exec_died 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec 11 08:56:17 np0005555520 systemd[1]: libpod-conmon-8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e.scope: Deactivated successfully.
Dec 11 08:56:18 np0005555520 python3.9[225627]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:19 np0005555520 python3.9[225779]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 11 08:56:20 np0005555520 python3.9[225947]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:20 np0005555520 systemd[1]: Started libpod-conmon-11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca.scope.
Dec 11 08:56:20 np0005555520 podman[225948]: 2025-12-11 13:56:20.948114148 +0000 UTC m=+0.155391040 container exec 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 11 08:56:20 np0005555520 podman[225948]: 2025-12-11 13:56:20.983386984 +0000 UTC m=+0.190663776 container exec_died 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 08:56:21 np0005555520 podman[225965]: 2025-12-11 13:56:21.032925291 +0000 UTC m=+0.099594658 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 11 08:56:21 np0005555520 systemd[1]: libpod-conmon-11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca.scope: Deactivated successfully.
Dec 11 08:56:22 np0005555520 python3.9[226151]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:22 np0005555520 systemd[1]: Started libpod-conmon-11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca.scope.
Dec 11 08:56:22 np0005555520 podman[226152]: 2025-12-11 13:56:22.192159365 +0000 UTC m=+0.143829366 container exec 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 11 08:56:22 np0005555520 podman[226152]: 2025-12-11 13:56:22.230037955 +0000 UTC m=+0.181707956 container exec_died 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 08:56:22 np0005555520 systemd[1]: libpod-conmon-11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca.scope: Deactivated successfully.
Dec 11 08:56:23 np0005555520 python3.9[226334]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:23 np0005555520 podman[226458]: 2025-12-11 13:56:23.970329306 +0000 UTC m=+0.098856069 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 08:56:24 np0005555520 python3.9[226501]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 11 08:56:25 np0005555520 python3.9[226675]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:25 np0005555520 systemd[1]: Started libpod-conmon-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.scope.
Dec 11 08:56:25 np0005555520 podman[226676]: 2025-12-11 13:56:25.448649521 +0000 UTC m=+0.155946213 container exec 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec 11 08:56:25 np0005555520 podman[226676]: 2025-12-11 13:56:25.486847229 +0000 UTC m=+0.194143901 container exec_died 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 08:56:25 np0005555520 systemd[1]: libpod-conmon-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.scope: Deactivated successfully.
Dec 11 08:56:26 np0005555520 python3.9[226856]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:26 np0005555520 systemd[1]: Started libpod-conmon-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.scope.
Dec 11 08:56:26 np0005555520 podman[226857]: 2025-12-11 13:56:26.736940416 +0000 UTC m=+0.129787921 container exec 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 11 08:56:26 np0005555520 podman[226857]: 2025-12-11 13:56:26.773886513 +0000 UTC m=+0.166733988 container exec_died 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 11 08:56:26 np0005555520 systemd[1]: libpod-conmon-4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd.scope: Deactivated successfully.
Dec 11 08:56:27 np0005555520 python3.9[227038]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:28 np0005555520 python3.9[227190]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 11 08:56:29 np0005555520 podman[203650]: time="2025-12-11T13:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 08:56:29 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28294 "" "Go-http-client/1.1"
Dec 11 08:56:29 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4278 "" "Go-http-client/1.1"
Dec 11 08:56:29 np0005555520 python3.9[227355]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:30 np0005555520 systemd[1]: Started libpod-conmon-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope.
Dec 11 08:56:30 np0005555520 podman[227356]: 2025-12-11 13:56:30.096713949 +0000 UTC m=+0.133103281 container exec ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Dec 11 08:56:30 np0005555520 podman[227356]: 2025-12-11 13:56:30.130099779 +0000 UTC m=+0.166488771 container exec_died ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm)
Dec 11 08:56:30 np0005555520 systemd[1]: libpod-conmon-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope: Deactivated successfully.
Dec 11 08:56:31 np0005555520 python3.9[227538]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:31 np0005555520 systemd[1]: Started libpod-conmon-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope.
Dec 11 08:56:31 np0005555520 podman[227539]: 2025-12-11 13:56:31.356519637 +0000 UTC m=+0.122974395 container exec ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 08:56:31 np0005555520 podman[227539]: 2025-12-11 13:56:31.39307461 +0000 UTC m=+0.159529338 container exec_died ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true)
Dec 11 08:56:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:56:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:56:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 08:56:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 08:56:31 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:56:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 08:56:31 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:56:31 np0005555520 systemd[1]: libpod-conmon-ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3.scope: Deactivated successfully.
Dec 11 08:56:32 np0005555520 python3.9[227719]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:33 np0005555520 python3.9[227871]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 11 08:56:34 np0005555520 python3.9[228034]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:34 np0005555520 systemd[1]: Started libpod-conmon-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.scope.
Dec 11 08:56:34 np0005555520 podman[228035]: 2025-12-11 13:56:34.882571856 +0000 UTC m=+0.151334716 container exec 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 08:56:34 np0005555520 podman[228035]: 2025-12-11 13:56:34.916579085 +0000 UTC m=+0.185341955 container exec_died 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:56:34 np0005555520 systemd[1]: libpod-conmon-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.scope: Deactivated successfully.
Dec 11 08:56:35 np0005555520 python3.9[228213]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:36 np0005555520 systemd[1]: Started libpod-conmon-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.scope.
Dec 11 08:56:36 np0005555520 podman[228214]: 2025-12-11 13:56:36.127325018 +0000 UTC m=+0.144765134 container exec 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:56:36 np0005555520 podman[228214]: 2025-12-11 13:56:36.162684901 +0000 UTC m=+0.180125047 container exec_died 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 08:56:36 np0005555520 systemd[1]: libpod-conmon-8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be.scope: Deactivated successfully.
Dec 11 08:56:36 np0005555520 podman[228229]: 2025-12-11 13:56:36.26234808 +0000 UTC m=+0.118030354 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 08:56:37 np0005555520 python3.9[228416]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:38 np0005555520 python3.9[228568]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 11 08:56:38 np0005555520 nova_compute[189440]: 2025-12-11 13:56:38.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:38 np0005555520 nova_compute[189440]: 2025-12-11 13:56:38.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 08:56:38 np0005555520 podman[228705]: 2025-12-11 13:56:38.930820042 +0000 UTC m=+0.098456481 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 11 08:56:39 np0005555520 python3.9[228752]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:39 np0005555520 systemd[1]: Started libpod-conmon-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.scope.
Dec 11 08:56:39 np0005555520 podman[228754]: 2025-12-11 13:56:39.265360459 +0000 UTC m=+0.118378113 container exec 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 08:56:39 np0005555520 podman[228754]: 2025-12-11 13:56:39.298647821 +0000 UTC m=+0.151665465 container exec_died 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 08:56:39 np0005555520 systemd[1]: libpod-conmon-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.scope: Deactivated successfully.
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.230 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.253 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.254 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.283 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.283 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.284 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.285 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 08:56:40 np0005555520 python3.9[228934]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:40 np0005555520 systemd[1]: Started libpod-conmon-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.scope.
Dec 11 08:56:40 np0005555520 podman[228935]: 2025-12-11 13:56:40.592461664 +0000 UTC m=+0.129337564 container exec 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 08:56:40 np0005555520 podman[228935]: 2025-12-11 13:56:40.62798508 +0000 UTC m=+0.164860970 container exec_died 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 08:56:40 np0005555520 systemd[1]: libpod-conmon-6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f.scope: Deactivated successfully.
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.740 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.741 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5677MB free_disk=72.43042373657227GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.742 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.742 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.813 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.813 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.835 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.847 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.849 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 08:56:40 np0005555520 nova_compute[189440]: 2025-12-11 13:56:40.849 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:56:41 np0005555520 podman[229087]: 2025-12-11 13:56:41.465168973 +0000 UTC m=+0.071043564 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:56:41 np0005555520 systemd[1]: a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc-efe42ff01eeddf0.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 08:56:41 np0005555520 systemd[1]: a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc-efe42ff01eeddf0.service: Failed with result 'exit-code'.
Dec 11 08:56:41 np0005555520 python3.9[229132]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:41 np0005555520 nova_compute[189440]: 2025-12-11 13:56:41.829 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:42 np0005555520 nova_compute[189440]: 2025-12-11 13:56:42.247 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:42 np0005555520 nova_compute[189440]: 2025-12-11 13:56:42.248 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:42 np0005555520 nova_compute[189440]: 2025-12-11 13:56:42.248 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:42 np0005555520 nova_compute[189440]: 2025-12-11 13:56:42.249 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:42 np0005555520 nova_compute[189440]: 2025-12-11 13:56:42.249 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:56:42 np0005555520 python3.9[229284]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 11 08:56:43 np0005555520 podman[229397]: 2025-12-11 13:56:43.506348823 +0000 UTC m=+0.087276465 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, name=ubi9, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec 11 08:56:43 np0005555520 podman[229396]: 2025-12-11 13:56:43.526866929 +0000 UTC m=+0.113446840 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 11 08:56:43 np0005555520 python3.9[229483]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:43 np0005555520 systemd[1]: Started libpod-conmon-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.scope.
Dec 11 08:56:44 np0005555520 podman[229484]: 2025-12-11 13:56:44.025351573 +0000 UTC m=+0.216054734 container exec 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Dec 11 08:56:44 np0005555520 podman[229484]: 2025-12-11 13:56:44.039303437 +0000 UTC m=+0.230006568 container exec_died 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 11 08:56:44 np0005555520 systemd[1]: libpod-conmon-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.scope: Deactivated successfully.
Dec 11 08:56:44 np0005555520 podman[229613]: 2025-12-11 13:56:44.855581644 +0000 UTC m=+0.149864010 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 11 08:56:45 np0005555520 python3.9[229684]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:45 np0005555520 systemd[1]: Started libpod-conmon-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.scope.
Dec 11 08:56:45 np0005555520 podman[229685]: 2025-12-11 13:56:45.30798787 +0000 UTC m=+0.157138149 container exec 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 08:56:45 np0005555520 podman[229685]: 2025-12-11 13:56:45.344166663 +0000 UTC m=+0.193316972 container exec_died 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, version=9.6, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc.)
Dec 11 08:56:45 np0005555520 systemd[1]: libpod-conmon-39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73.scope: Deactivated successfully.
Dec 11 08:56:46 np0005555520 python3.9[229865]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:47 np0005555520 python3.9[230017]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec 11 08:56:48 np0005555520 podman[230152]: 2025-12-11 13:56:48.28675566 +0000 UTC m=+0.149268645 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 08:56:48 np0005555520 python3.9[230200]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:48 np0005555520 systemd[1]: Started libpod-conmon-a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc.scope.
Dec 11 08:56:48 np0005555520 podman[230207]: 2025-12-11 13:56:48.59586544 +0000 UTC m=+0.133611659 container exec a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Dec 11 08:56:48 np0005555520 podman[230207]: 2025-12-11 13:56:48.605691002 +0000 UTC m=+0.143437191 container exec_died a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 11 08:56:48 np0005555520 systemd[1]: libpod-conmon-a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc.scope: Deactivated successfully.
Dec 11 08:56:49 np0005555520 python3.9[230386]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:49 np0005555520 systemd[1]: Started libpod-conmon-a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc.scope.
Dec 11 08:56:49 np0005555520 podman[230388]: 2025-12-11 13:56:49.7469752 +0000 UTC m=+0.142127778 container exec a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Dec 11 08:56:49 np0005555520 podman[230388]: 2025-12-11 13:56:49.781428831 +0000 UTC m=+0.176581329 container exec_died a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 08:56:49 np0005555520 systemd[1]: libpod-conmon-a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc.scope: Deactivated successfully.
Dec 11 08:56:50 np0005555520 python3.9[230567]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:51 np0005555520 podman[230667]: 2025-12-11 13:56:51.673442388 +0000 UTC m=+0.122261708 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 08:56:51 np0005555520 python3.9[230740]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec 11 08:56:52 np0005555520 python3.9[230904]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:53 np0005555520 systemd[1]: Started libpod-conmon-72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a.scope.
Dec 11 08:56:53 np0005555520 podman[230905]: 2025-12-11 13:56:53.08307771 +0000 UTC m=+0.138906789 container exec 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, config_id=edpm, release=1214.1726694543, container_name=kepler, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64)
Dec 11 08:56:53 np0005555520 podman[230905]: 2025-12-11 13:56:53.115864809 +0000 UTC m=+0.171693908 container exec_died 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc.)
Dec 11 08:56:53 np0005555520 systemd[1]: libpod-conmon-72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a.scope: Deactivated successfully.
Dec 11 08:56:54 np0005555520 python3.9[231086]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 11 08:56:54 np0005555520 systemd[1]: Started libpod-conmon-72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a.scope.
Dec 11 08:56:54 np0005555520 podman[231087]: 2025-12-11 13:56:54.259310671 +0000 UTC m=+0.115291186 container exec 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, config_id=edpm, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Dec 11 08:56:54 np0005555520 podman[231087]: 2025-12-11 13:56:54.291636899 +0000 UTC m=+0.147617404 container exec_died 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 11 08:56:54 np0005555520 systemd[1]: libpod-conmon-72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a.scope: Deactivated successfully.
Dec 11 08:56:54 np0005555520 podman[231100]: 2025-12-11 13:56:54.345283683 +0000 UTC m=+0.090559316 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 08:56:55 np0005555520 python3.9[231287]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:56 np0005555520 python3.9[231439]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:57 np0005555520 python3.9[231591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:56:57 np0005555520 python3.9[231714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765461416.4817533-844-118468128366120/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:58 np0005555520 python3.9[231866]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:56:59 np0005555520 podman[203650]: time="2025-12-11T13:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 08:56:59 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28293 "" "Go-http-client/1.1"
Dec 11 08:56:59 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4276 "" "Go-http-client/1.1"
Dec 11 08:56:59 np0005555520 python3.9[232018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:00 np0005555520 python3.9[232096]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:01 np0005555520 python3.9[232248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 08:57:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:57:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:57:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 08:57:01 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:57:01 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 08:57:01 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:57:01 np0005555520 python3.9[232326]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.jskxzqyk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:02 np0005555520 python3.9[232480]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:03 np0005555520 python3.9[232558]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:57:04.066 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 08:57:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:57:04.067 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 08:57:04 np0005555520 ovn_metadata_agent[106681]: 2025-12-11 13:57:04.067 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 08:57:04 np0005555520 python3.9[232710]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:57:05 np0005555520 python3[232865]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 11 08:57:06 np0005555520 podman[233017]: 2025-12-11 13:57:06.460573686 +0000 UTC m=+0.072665305 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 08:57:06 np0005555520 python3.9[233018]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:07 np0005555520 python3.9[233119]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:08 np0005555520 python3.9[233271]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:08 np0005555520 python3.9[233349]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:09 np0005555520 podman[233449]: 2025-12-11 13:57:09.520623493 +0000 UTC m=+0.115124633 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 11 08:57:09 np0005555520 python3.9[233520]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:10 np0005555520 python3.9[233598]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:11 np0005555520 python3.9[233751]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:11 np0005555520 podman[233801]: 2025-12-11 13:57:11.817494252 +0000 UTC m=+0.144568509 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec 11 08:57:11 np0005555520 python3.9[233847]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:12 np0005555520 python3.9[234001]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:13 np0005555520 podman[234126]: 2025-12-11 13:57:13.716453221 +0000 UTC m=+0.088033333 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true)
Dec 11 08:57:13 np0005555520 podman[234127]: 2025-12-11 13:57:13.720732258 +0000 UTC m=+0.084373814 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=)
Dec 11 08:57:13 np0005555520 python3.9[234133]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765461432.222576-969-270383887796101/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:14 np0005555520 python3.9[234317]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:15 np0005555520 podman[234441]: 2025-12-11 13:57:15.438299429 +0000 UTC m=+0.096390550 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 11 08:57:15 np0005555520 python3.9[234486]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:57:16 np0005555520 python3.9[234641]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:17 np0005555520 python3.9[234793]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:57:18 np0005555520 python3.9[234946]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 08:57:18 np0005555520 podman[234947]: 2025-12-11 13:57:18.57224936 +0000 UTC m=+0.161678232 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 11 08:57:19 np0005555520 python3.9[235124]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:57:20 np0005555520 python3.9[235280]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:20 np0005555520 systemd[1]: session-26.scope: Deactivated successfully.
Dec 11 08:57:20 np0005555520 systemd[1]: session-26.scope: Consumed 1min 41.695s CPU time.
Dec 11 08:57:20 np0005555520 systemd-logind[786]: Session 26 logged out. Waiting for processes to exit.
Dec 11 08:57:20 np0005555520 systemd-logind[786]: Removed session 26.
Dec 11 08:57:22 np0005555520 podman[235305]: 2025-12-11 13:57:22.511738651 +0000 UTC m=+0.116531896 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Dec 11 08:57:24 np0005555520 podman[235327]: 2025-12-11 13:57:24.483122238 +0000 UTC m=+0.084406434 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 08:57:27 np0005555520 systemd-logind[786]: New session 27 of user zuul.
Dec 11 08:57:27 np0005555520 systemd[1]: Started Session 27 of User zuul.
Dec 11 08:57:29 np0005555520 python3.9[235503]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:57:29 np0005555520 podman[203650]: time="2025-12-11T13:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 08:57:29 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 08:57:29 np0005555520 podman[203650]: @ - - [11/Dec/2025:13:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4279 "" "Go-http-client/1.1"
Dec 11 08:57:31 np0005555520 python3.9[235659]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec 11 08:57:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 08:57:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:57:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 08:57:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 08:57:31 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:57:31 np0005555520 openstack_network_exporter[205834]: ERROR   13:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 08:57:31 np0005555520 openstack_network_exporter[205834]: 
Dec 11 08:57:32 np0005555520 python3.9[235812]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 08:57:33 np0005555520 python3.9[235896]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 08:57:36 np0005555520 podman[236054]: 2025-12-11 13:57:36.675582798 +0000 UTC m=+0.081419456 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 08:57:36 np0005555520 python3.9[236055]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:37 np0005555520 nova_compute[189440]: 2025-12-11 13:57:37.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:57:37 np0005555520 nova_compute[189440]: 2025-12-11 13:57:37.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 11 08:57:37 np0005555520 nova_compute[189440]: 2025-12-11 13:57:37.302 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 11 08:57:37 np0005555520 nova_compute[189440]: 2025-12-11 13:57:37.303 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:57:37 np0005555520 nova_compute[189440]: 2025-12-11 13:57:37.303 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 11 08:57:37 np0005555520 nova_compute[189440]: 2025-12-11 13:57:37.381 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:57:37 np0005555520 python3.9[236200]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765461455.956692-54-188580571853640/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:38 np0005555520 nova_compute[189440]: 2025-12-11 13:57:38.409 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 08:57:38 np0005555520 nova_compute[189440]: 2025-12-11 13:57:38.410 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 08:57:38 np0005555520 python3.9[236352]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:57:39 np0005555520 python3.9[236504]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 08:57:40 np0005555520 podman[236599]: 2025-12-11 13:57:40.054229966 +0000 UTC m=+0.076462076 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 11 08:57:40 np0005555520 python3.9[236646]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765461458.9196844-77-194114221299399/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 13:57:41 compute-0 python3.9[236798]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 13:57:41 compute-0 systemd[1]: Stopping System Logging Service...
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.230 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.233 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.404 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.405 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.405 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.405 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 13:57:41 compute-0 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec 11 13:57:41 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec 11 13:57:41 compute-0 systemd[1]: Stopped System Logging Service.
Dec 11 13:57:41 compute-0 systemd[1]: rsyslog.service: Consumed 3.381s CPU time, 8.8M memory peak, read 4.0K from disk, written 6.0M to disk.
Dec 11 13:57:41 compute-0 systemd[1]: Starting System Logging Service...
Dec 11 13:57:41 compute-0 rsyslogd[236802]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236802" x-info="https://www.rsyslog.com"] start
Dec 11 13:57:41 compute-0 systemd[1]: Started System Logging Service.
Dec 11 13:57:41 compute-0 rsyslogd[236802]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec 11 13:57:41 compute-0 rsyslogd[236802]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec 11 13:57:41 compute-0 rsyslogd[236802]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 13:57:41 compute-0 rsyslogd[236802]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec 11 13:57:41 compute-0 rsyslogd[236802]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.729 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.730 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5710MB free_disk=72.42497253417969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.731 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.731 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.789 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.789 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 13:57:41 compute-0 nova_compute[189440]: 2025-12-11 13:57:41.819 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 13:57:41 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec 11 13:57:41 compute-0 systemd[1]: session-27.scope: Consumed 10.959s CPU time.
Dec 11 13:57:41 compute-0 systemd-logind[786]: Session 27 logged out. Waiting for processes to exit.
Dec 11 13:57:41 compute-0 systemd-logind[786]: Removed session 27.
Dec 11 13:57:42 compute-0 podman[236831]: 2025-12-11 13:57:42.022632557 +0000 UTC m=+0.072920421 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 11 13:57:42 compute-0 nova_compute[189440]: 2025-12-11 13:57:42.355 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 13:57:42 compute-0 nova_compute[189440]: 2025-12-11 13:57:42.357 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 13:57:42 compute-0 nova_compute[189440]: 2025-12-11 13:57:42.357 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.977 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.977 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:57:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:57:43 compute-0 nova_compute[189440]: 2025-12-11 13:57:43.358 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:57:43 compute-0 nova_compute[189440]: 2025-12-11 13:57:43.359 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 13:57:43 compute-0 nova_compute[189440]: 2025-12-11 13:57:43.359 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 13:57:43 compute-0 nova_compute[189440]: 2025-12-11 13:57:43.591 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 13:57:43 compute-0 nova_compute[189440]: 2025-12-11 13:57:43.592 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:57:43 compute-0 nova_compute[189440]: 2025-12-11 13:57:43.592 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:57:43 compute-0 nova_compute[189440]: 2025-12-11 13:57:43.593 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:57:43 compute-0 nova_compute[189440]: 2025-12-11 13:57:43.593 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:57:43 compute-0 nova_compute[189440]: 2025-12-11 13:57:43.593 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:57:44 compute-0 podman[236852]: 2025-12-11 13:57:44.498382257 +0000 UTC m=+0.086763176 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 11 13:57:44 compute-0 podman[236853]: 2025-12-11 13:57:44.526810507 +0000 UTC m=+0.106737311 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, container_name=kepler, name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, 
distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 11 13:57:46 compute-0 podman[236890]: 2025-12-11 13:57:46.489005456 +0000 UTC m=+0.087822442 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, 
tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251210)
Dec 11 13:57:49 compute-0 podman[236909]: 2025-12-11 13:57:49.552318853 +0000 UTC m=+0.142289454 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 11 13:57:53 compute-0 podman[236937]: 2025-12-11 13:57:53.516437636 +0000 UTC m=+0.111514536 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.6, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec 11 13:57:55 compute-0 podman[236956]: 2025-12-11 13:57:55.550561122 +0000 UTC m=+0.139158587 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 13:57:59 compute-0 podman[203650]: time="2025-12-11T13:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 13:57:59 compute-0 podman[203650]: @ - - [11/Dec/2025:13:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 13:57:59 compute-0 podman[203650]: @ - - [11/Dec/2025:13:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4277 "" "Go-http-client/1.1"
Dec 11 13:58:01 compute-0 openstack_network_exporter[205834]: ERROR   13:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 13:58:01 compute-0 openstack_network_exporter[205834]: ERROR   13:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 13:58:01 compute-0 openstack_network_exporter[205834]: ERROR   13:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 13:58:01 compute-0 openstack_network_exporter[205834]: ERROR   13:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 13:58:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 13:58:01 compute-0 openstack_network_exporter[205834]: ERROR   13:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 13:58:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 13:58:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 13:58:04.068 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 13:58:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 13:58:04.068 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 13:58:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 13:58:04.069 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 13:58:07 compute-0 podman[236979]: 2025-12-11 13:58:07.509600542 +0000 UTC m=+0.101963525 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 13:58:10 compute-0 podman[237004]: 2025-12-11 13:58:10.531634978 +0000 UTC m=+0.115760401 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 11 13:58:10 compute-0 systemd-logind[786]: New session 28 of user zuul.
Dec 11 13:58:10 compute-0 systemd[1]: Started Session 28 of User zuul.
Dec 11 13:58:12 compute-0 python3[237200]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 13:58:12 compute-0 podman[237223]: 2025-12-11 13:58:12.472641343 +0000 UTC m=+0.075252987 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 11 13:58:14 compute-0 python3[237442]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 13:58:14 compute-0 podman[237474]: 2025-12-11 13:58:14.751361262 +0000 UTC m=+0.078027503 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 11 13:58:14 compute-0 podman[237482]: 2025-12-11 13:58:14.783933243 +0000 UTC m=+0.107304824 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 13:58:15 compute-0 python3[237633]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 13:58:17 compute-0 podman[237759]: 2025-12-11 13:58:17.245593062 +0000 UTC m=+0.084592914 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm)
Dec 11 13:58:17 compute-0 python3[237800]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 13:58:18 compute-0 python3[237954]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 13:58:20 compute-0 podman[238071]: 2025-12-11 13:58:20.603390365 +0000 UTC m=+0.181126356 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 13:58:20 compute-0 python3[238204]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 13:58:22 compute-0 python3[238368]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 13:58:24 compute-0 podman[238407]: 2025-12-11 13:58:24.470074764 +0000 UTC m=+0.071544527 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 13:58:26 compute-0 podman[238427]: 2025-12-11 13:58:26.509366585 +0000 UTC m=+0.105698366 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 13:58:29 compute-0 podman[203650]: time="2025-12-11T13:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 13:58:29 compute-0 podman[203650]: @ - - [11/Dec/2025:13:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 13:58:29 compute-0 podman[203650]: @ - - [11/Dec/2025:13:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4278 "" "Go-http-client/1.1"
Dec 11 13:58:31 compute-0 openstack_network_exporter[205834]: ERROR   13:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 13:58:31 compute-0 openstack_network_exporter[205834]: ERROR   13:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 13:58:31 compute-0 openstack_network_exporter[205834]: ERROR   13:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 13:58:31 compute-0 openstack_network_exporter[205834]: ERROR   13:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 13:58:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 13:58:31 compute-0 openstack_network_exporter[205834]: ERROR   13:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 13:58:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 13:58:38 compute-0 podman[238452]: 2025-12-11 13:58:38.51275565 +0000 UTC m=+0.096730388 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 13:58:39 compute-0 nova_compute[189440]: 2025-12-11 13:58:39.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:39 compute-0 nova_compute[189440]: 2025-12-11 13:58:39.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 13:58:41 compute-0 nova_compute[189440]: 2025-12-11 13:58:41.232 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:41 compute-0 podman[238477]: 2025-12-11 13:58:41.489034117 +0000 UTC m=+0.084267115 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.282 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.282 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.283 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.283 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.642 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.644 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.42570877075195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.644 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.645 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.783 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.784 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.855 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing inventories for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.960 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating ProviderTree inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.960 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 13:58:42 compute-0 nova_compute[189440]: 2025-12-11 13:58:42.980 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing aggregate associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 11 13:58:43 compute-0 nova_compute[189440]: 2025-12-11 13:58:43.002 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing trait associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 11 13:58:43 compute-0 nova_compute[189440]: 2025-12-11 13:58:43.025 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 13:58:43 compute-0 nova_compute[189440]: 2025-12-11 13:58:43.042 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 13:58:43 compute-0 nova_compute[189440]: 2025-12-11 13:58:43.044 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 13:58:43 compute-0 nova_compute[189440]: 2025-12-11 13:58:43.044 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 13:58:43 compute-0 podman[238496]: 2025-12-11 13:58:43.495098399 +0000 UTC m=+0.086514539 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Dec 11 13:58:44 compute-0 nova_compute[189440]: 2025-12-11 13:58:44.040 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:44 compute-0 nova_compute[189440]: 2025-12-11 13:58:44.040 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:44 compute-0 nova_compute[189440]: 2025-12-11 13:58:44.040 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 13:58:44 compute-0 nova_compute[189440]: 2025-12-11 13:58:44.040 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 13:58:44 compute-0 nova_compute[189440]: 2025-12-11 13:58:44.057 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 13:58:44 compute-0 nova_compute[189440]: 2025-12-11 13:58:44.057 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:44 compute-0 nova_compute[189440]: 2025-12-11 13:58:44.058 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:44 compute-0 nova_compute[189440]: 2025-12-11 13:58:44.233 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:45 compute-0 nova_compute[189440]: 2025-12-11 13:58:45.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:58:45 compute-0 podman[238515]: 2025-12-11 13:58:45.47876649 +0000 UTC m=+0.067385918 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543)
Dec 11 13:58:45 compute-0 podman[238514]: 2025-12-11 13:58:45.510721391 +0000 UTC m=+0.109588606 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 13:58:47 compute-0 podman[238550]: 2025-12-11 13:58:47.517201513 +0000 UTC m=+0.108517589 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, 
org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec 11 13:58:51 compute-0 podman[238570]: 2025-12-11 13:58:51.563220886 +0000 UTC m=+0.153882666 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 13:58:55 compute-0 podman[238594]: 2025-12-11 13:58:55.49743588 +0000 UTC m=+0.082201675 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 11 13:58:57 compute-0 podman[238615]: 2025-12-11 13:58:57.529427838 +0000 UTC m=+0.125365197 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 13:58:59 compute-0 podman[203650]: time="2025-12-11T13:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 13:58:59 compute-0 podman[203650]: @ - - [11/Dec/2025:13:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 13:58:59 compute-0 podman[203650]: @ - - [11/Dec/2025:13:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4275 "" "Go-http-client/1.1"
Dec 11 13:59:01 compute-0 openstack_network_exporter[205834]: ERROR   13:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 13:59:01 compute-0 openstack_network_exporter[205834]: ERROR   13:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 13:59:01 compute-0 openstack_network_exporter[205834]: ERROR   13:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 13:59:01 compute-0 openstack_network_exporter[205834]: ERROR   13:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 13:59:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 13:59:01 compute-0 openstack_network_exporter[205834]: ERROR   13:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 13:59:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 13:59:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 13:59:04.068 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 13:59:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 13:59:04.071 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 13:59:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 13:59:04.071 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 13:59:09 compute-0 podman[238638]: 2025-12-11 13:59:09.530258201 +0000 UTC m=+0.115186731 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 13:59:12 compute-0 podman[238661]: 2025-12-11 13:59:12.481115219 +0000 UTC m=+0.074859048 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 13:59:14 compute-0 podman[238680]: 2025-12-11 13:59:14.508301261 +0000 UTC m=+0.106885792 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 13:59:16 compute-0 podman[238700]: 2025-12-11 13:59:16.467263846 +0000 UTC m=+0.071692582 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 13:59:16 compute-0 podman[238701]: 2025-12-11 13:59:16.505273303 +0000 UTC m=+0.097363202 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=)
Dec 11 13:59:18 compute-0 podman[238738]: 2025-12-11 13:59:18.540094588 +0000 UTC m=+0.132490898 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm)
Dec 11 13:59:21 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec 11 13:59:21 compute-0 systemd[1]: session-28.scope: Consumed 9.342s CPU time.
Dec 11 13:59:21 compute-0 systemd-logind[786]: Session 28 logged out. Waiting for processes to exit.
Dec 11 13:59:21 compute-0 systemd-logind[786]: Removed session 28.
Dec 11 13:59:22 compute-0 podman[238760]: 2025-12-11 13:59:22.535339435 +0000 UTC m=+0.130202683 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 11 13:59:26 compute-0 podman[238786]: 2025-12-11 13:59:26.512603698 +0000 UTC m=+0.107297630 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 13:59:28 compute-0 podman[238807]: 2025-12-11 13:59:28.535468126 +0000 UTC m=+0.126168087 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 13:59:29 compute-0 podman[203650]: time="2025-12-11T13:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 13:59:29 compute-0 podman[203650]: @ - - [11/Dec/2025:13:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 13:59:29 compute-0 podman[203650]: @ - - [11/Dec/2025:13:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4287 "" "Go-http-client/1.1"
Dec 11 13:59:31 compute-0 openstack_network_exporter[205834]: ERROR   13:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 13:59:31 compute-0 openstack_network_exporter[205834]: ERROR   13:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 13:59:31 compute-0 openstack_network_exporter[205834]: ERROR   13:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 13:59:31 compute-0 openstack_network_exporter[205834]: ERROR   13:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 13:59:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 13:59:31 compute-0 openstack_network_exporter[205834]: ERROR   13:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 13:59:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 13:59:40 compute-0 podman[238829]: 2025-12-11 13:59:40.505145648 +0000 UTC m=+0.093679573 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 13:59:41 compute-0 nova_compute[189440]: 2025-12-11 13:59:41.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:59:41 compute-0 nova_compute[189440]: 2025-12-11 13:59:41.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 13:59:42 compute-0 nova_compute[189440]: 2025-12-11 13:59:42.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.978 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.979 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 13:59:42.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 13:59:43 compute-0 nova_compute[189440]: 2025-12-11 13:59:43.230 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:59:43 compute-0 nova_compute[189440]: 2025-12-11 13:59:43.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:59:43 compute-0 nova_compute[189440]: 2025-12-11 13:59:43.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:59:43 compute-0 podman[238853]: 2025-12-11 13:59:43.506998606 +0000 UTC m=+0.102090945 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 13:59:44 compute-0 nova_compute[189440]: 2025-12-11 13:59:44.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:59:44 compute-0 nova_compute[189440]: 2025-12-11 13:59:44.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 13:59:44 compute-0 nova_compute[189440]: 2025-12-11 13:59:44.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 13:59:44 compute-0 nova_compute[189440]: 2025-12-11 13:59:44.677 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 13:59:44 compute-0 nova_compute[189440]: 2025-12-11 13:59:44.677 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 13:59:44 compute-0 nova_compute[189440]: 2025-12-11 13:59:44.713 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 13:59:44 compute-0 nova_compute[189440]: 2025-12-11 13:59:44.713 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 13:59:44 compute-0 nova_compute[189440]: 2025-12-11 13:59:44.713 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 13:59:44 compute-0 nova_compute[189440]: 2025-12-11 13:59:44.713 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 13:59:44 compute-0 podman[238871]: 2025-12-11 13:59:44.749419425 +0000 UTC m=+0.080182746 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.013 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.015 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5713MB free_disk=72.42568969726562GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.015 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.016 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.093 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.093 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.131 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.147 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.150 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 11 13:59:45 compute-0 nova_compute[189440]: 2025-12-11 13:59:45.151 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 13:59:46 compute-0 nova_compute[189440]: 2025-12-11 13:59:46.709 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 13:59:46 compute-0 nova_compute[189440]: 2025-12-11 13:59:46.710 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 13:59:47 compute-0 podman[238889]: 2025-12-11 13:59:47.524358781 +0000 UTC m=+0.115656957 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 11 13:59:47 compute-0 podman[238890]: 2025-12-11 13:59:47.534956073 +0000 UTC m=+0.132039237 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, name=ubi9, maintainer=Red Hat, Inc., 
description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 11 13:59:49 compute-0 podman[238926]: 2025-12-11 13:59:49.500021647 +0000 UTC m=+0.100399844 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, 
tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Dec 11 13:59:53 compute-0 podman[238945]: 2025-12-11 13:59:53.571265908 +0000 UTC m=+0.164319402 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true)
Dec 11 13:59:57 compute-0 podman[238972]: 2025-12-11 13:59:57.488315318 +0000 UTC m=+0.084613750 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350)
Dec 11 13:59:59 compute-0 podman[238991]: 2025-12-11 13:59:59.504141177 +0000 UTC m=+0.098928089 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 13:59:59 compute-0 podman[203650]: time="2025-12-11T13:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 13:59:59 compute-0 podman[203650]: @ - - [11/Dec/2025:13:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 13:59:59 compute-0 podman[203650]: @ - - [11/Dec/2025:13:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4275 "" "Go-http-client/1.1"
Dec 11 14:00:01 compute-0 openstack_network_exporter[205834]: ERROR   14:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:00:01 compute-0 openstack_network_exporter[205834]: ERROR   14:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:00:01 compute-0 openstack_network_exporter[205834]: ERROR   14:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:00:01 compute-0 openstack_network_exporter[205834]: ERROR   14:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:00:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:00:01 compute-0 openstack_network_exporter[205834]: ERROR   14:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:00:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:00:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:00:04.070 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:00:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:00:04.071 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:00:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:00:04.071 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:00:11 compute-0 podman[239015]: 2025-12-11 14:00:11.495271214 +0000 UTC m=+0.090151311 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:00:14 compute-0 podman[239038]: 2025-12-11 14:00:14.464762925 +0000 UTC m=+0.066182482 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:00:15 compute-0 podman[239057]: 2025-12-11 14:00:15.491081083 +0000 UTC m=+0.088590895 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:00:18 compute-0 podman[239075]: 2025-12-11 14:00:18.521343158 +0000 UTC m=+0.118912004 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 11 14:00:18 compute-0 podman[239076]: 2025-12-11 14:00:18.553989042 +0000 UTC m=+0.137947265 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, config_id=edpm, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 11 14:00:20 compute-0 podman[239111]: 2025-12-11 14:00:20.493368117 +0000 UTC m=+0.088862640 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Dec 11 14:00:24 compute-0 podman[239131]: 2025-12-11 14:00:24.536863868 +0000 UTC m=+0.142456914 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 11 14:00:28 compute-0 podman[239156]: 2025-12-11 14:00:28.551536954 +0000 UTC m=+0.136153043 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, 
architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec 11 14:00:29 compute-0 podman[203650]: time="2025-12-11T14:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:00:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:00:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4284 "" "Go-http-client/1.1"
Dec 11 14:00:30 compute-0 podman[239177]: 2025-12-11 14:00:30.515051022 +0000 UTC m=+0.100974258 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:00:31 compute-0 openstack_network_exporter[205834]: ERROR   14:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:00:31 compute-0 openstack_network_exporter[205834]: ERROR   14:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:00:31 compute-0 openstack_network_exporter[205834]: ERROR   14:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:00:31 compute-0 openstack_network_exporter[205834]: ERROR   14:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:00:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:00:31 compute-0 openstack_network_exporter[205834]: ERROR   14:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:00:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:00:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:00:36.184 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:00:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:00:36.186 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:00:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:00:36.188 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:00:41 compute-0 nova_compute[189440]: 2025-12-11 14:00:41.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:41 compute-0 nova_compute[189440]: 2025-12-11 14:00:41.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:00:42 compute-0 nova_compute[189440]: 2025-12-11 14:00:42.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:42 compute-0 podman[239201]: 2025-12-11 14:00:42.507248043 +0000 UTC m=+0.096683316 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:00:43 compute-0 nova_compute[189440]: 2025-12-11 14:00:43.230 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:44 compute-0 podman[239224]: 2025-12-11 14:00:44.847031095 +0000 UTC m=+0.146061039 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.252 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.252 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.252 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.252 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.286 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.287 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.287 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.288 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.699 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.700 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5713MB free_disk=72.42570877075195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.700 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.701 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.765 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.765 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.789 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.815 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.817 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:00:45 compute-0 nova_compute[189440]: 2025-12-11 14:00:45.817 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:00:46 compute-0 podman[239243]: 2025-12-11 14:00:46.472282992 +0000 UTC m=+0.074610842 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, 
container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 14:00:48 compute-0 nova_compute[189440]: 2025-12-11 14:00:48.800 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:48 compute-0 nova_compute[189440]: 2025-12-11 14:00:48.800 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:00:49 compute-0 podman[239261]: 2025-12-11 14:00:49.488522714 +0000 UTC m=+0.084758884 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 11 14:00:49 compute-0 podman[239262]: 2025-12-11 14:00:49.492067457 +0000 UTC m=+0.088753228 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, version=9.4)
Dec 11 14:00:51 compute-0 podman[239299]: 2025-12-11 14:00:51.473135762 +0000 UTC m=+0.077267564 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 11 14:00:55 compute-0 podman[239322]: 2025-12-11 14:00:55.541844972 +0000 UTC m=+0.134468112 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 14:00:59 compute-0 podman[239348]: 2025-12-11 14:00:59.486301543 +0000 UTC m=+0.085572303 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 11 14:00:59 compute-0 podman[203650]: time="2025-12-11T14:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:00:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:00:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4291 "" "Go-http-client/1.1"
Dec 11 14:01:01 compute-0 openstack_network_exporter[205834]: ERROR   14:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:01:01 compute-0 openstack_network_exporter[205834]: ERROR   14:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:01:01 compute-0 openstack_network_exporter[205834]: ERROR   14:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:01:01 compute-0 openstack_network_exporter[205834]: ERROR   14:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:01:01 compute-0 openstack_network_exporter[205834]: ERROR   14:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:01:01 compute-0 podman[239378]: 2025-12-11 14:01:01.475149913 +0000 UTC m=+0.072751139 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 14:01:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:04.073 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:04.074 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:04.074 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:13 compute-0 podman[239401]: 2025-12-11 14:01:13.465833336 +0000 UTC m=+0.066685384 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:01:15 compute-0 podman[239426]: 2025-12-11 14:01:15.516122124 +0000 UTC m=+0.098894428 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:01:17 compute-0 podman[239444]: 2025-12-11 14:01:17.481091847 +0000 UTC m=+0.079263543 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 11 14:01:20 compute-0 podman[239465]: 2025-12-11 14:01:20.505078812 +0000 UTC m=+0.097102616 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 11 14:01:20 compute-0 podman[239466]: 2025-12-11 14:01:20.506065566 +0000 UTC m=+0.104026411 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 11 14:01:22 compute-0 podman[239501]: 2025-12-11 14:01:22.543965851 +0000 UTC m=+0.133673365 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Dec 11 14:01:26 compute-0 podman[239520]: 2025-12-11 14:01:26.555720118 +0000 UTC m=+0.154592821 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 14:01:29 compute-0 podman[203650]: time="2025-12-11T14:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:01:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:01:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4288 "" "Go-http-client/1.1"
Dec 11 14:01:30 compute-0 podman[239545]: 2025-12-11 14:01:30.50025688 +0000 UTC m=+0.094299319 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64)
Dec 11 14:01:31 compute-0 openstack_network_exporter[205834]: ERROR   14:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:01:31 compute-0 openstack_network_exporter[205834]: ERROR   14:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:01:31 compute-0 openstack_network_exporter[205834]: ERROR   14:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:01:31 compute-0 openstack_network_exporter[205834]: ERROR   14:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:01:31 compute-0 openstack_network_exporter[205834]: ERROR   14:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:01:32 compute-0 podman[239565]: 2025-12-11 14:01:32.508962212 +0000 UTC m=+0.096641625 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 14:01:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:36.798 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:01:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:36.799 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:01:41 compute-0 nova_compute[189440]: 2025-12-11 14:01:41.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:01:41 compute-0 nova_compute[189440]: 2025-12-11 14:01:41.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.979 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.980 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:01:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:01:43 compute-0 nova_compute[189440]: 2025-12-11 14:01:43.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:01:44 compute-0 podman[239590]: 2025-12-11 14:01:44.490171228 +0000 UTC m=+0.087608652 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.266 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.266 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.267 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.267 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.638 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.639 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5707MB free_disk=72.42578125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.640 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.640 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:45 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:45.800 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.861 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.861 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.890 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.904 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.905 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:01:45 compute-0 nova_compute[189440]: 2025-12-11 14:01:45.906 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:46 compute-0 podman[239612]: 2025-12-11 14:01:46.472297353 +0000 UTC m=+0.077102205 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 11 14:01:46 compute-0 nova_compute[189440]: 2025-12-11 14:01:46.901 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:01:46 compute-0 nova_compute[189440]: 2025-12-11 14:01:46.902 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:01:46 compute-0 nova_compute[189440]: 2025-12-11 14:01:46.902 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:01:46 compute-0 nova_compute[189440]: 2025-12-11 14:01:46.902 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:01:46 compute-0 nova_compute[189440]: 2025-12-11 14:01:46.922 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 14:01:46 compute-0 nova_compute[189440]: 2025-12-11 14:01:46.922 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:01:47 compute-0 nova_compute[189440]: 2025-12-11 14:01:47.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:01:48 compute-0 podman[239631]: 2025-12-11 14:01:48.472302034 +0000 UTC m=+0.072710357 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Dec 11 14:01:49 compute-0 nova_compute[189440]: 2025-12-11 14:01:49.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:01:49 compute-0 nova_compute[189440]: 2025-12-11 14:01:49.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.066 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.067 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.084 189444 DEBUG nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.202 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.203 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.214 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.214 189444 INFO nova.compute.claims [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.320 189444 DEBUG nova.compute.provider_tree [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.338 189444 DEBUG nova.scheduler.client.report [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.357 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.358 189444 DEBUG nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.398 189444 DEBUG nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.399 189444 DEBUG nova.network.neutron [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.424 189444 INFO nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.456 189444 DEBUG nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.543 189444 DEBUG nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.544 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.545 189444 INFO nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Creating image(s)#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.546 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.546 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.547 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.548 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.549 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.983 189444 WARNING oslo_policy.policy [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec 11 14:01:50 compute-0 nova_compute[189440]: 2025-12-11 14:01:50.984 189444 WARNING oslo_policy.policy [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec 11 14:01:51 compute-0 podman[239651]: 2025-12-11 14:01:51.471909072 +0000 UTC m=+0.075962936 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:01:51 compute-0 podman[239652]: 2025-12-11 14:01:51.472461156 +0000 UTC m=+0.071135549 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.240 189444 DEBUG nova.network.neutron [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Successfully created port: e82f4978-3a5a-4e23-8c30-c60478cd656f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.331 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.389 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031.part --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.390 189444 DEBUG nova.virt.images [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] 714a3758-ec97-4149-8cfb-208787ab3704 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.391 189444 DEBUG nova.privsep.utils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.391 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031.part /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.641 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031.part /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031.converted" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.645 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.701 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031.converted --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.702 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
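The fetch cycle that just released the cache lock ran `qemu-img info` and `qemu-img convert` under `oslo_concurrency.prlimit`, which caps the child's address space at 1 GiB (`--as=1073741824`) and CPU time at 30 s (`--cpu=30`) so a malformed image cannot wedge the compute service. A sketch that rebuilds the same wrapped command line as a list, for inspection or replay; the base-image path is copied from the log above, and the helper name is illustrative, not a Nova API:

```python
# Reconstruct the prlimit-wrapped `qemu-img info` invocation logged by
# oslo_concurrency.processutils. Resource caps mirror the logged flags.

def qemu_img_info_cmd(path: str,
                      mem_bytes: int = 1073741824,
                      cpu_secs: int = 30) -> list[str]:
    """Command list equivalent to the invocation seen in the log."""
    return [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        f"--as={mem_bytes}", f"--cpu={cpu_secs}", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
    ]

cmd = qemu_img_info_cmd(
    "/var/lib/nova/instances/_base/"
    "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031.part")
print(" ".join(cmd))
```

`--force-share` lets `qemu-img info` open an image that another process may hold open for writing, and `--output=json` is what Nova parses to decide (as at 14:01:52.390) that the fetched image was qcow2 and must be converted to raw.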
Dec 11 14:01:52 compute-0 nova_compute[189440]: 2025-12-11 14:01:52.716 189444 INFO oslo.privsep.daemon [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp08pmumca/privsep.sock']#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.388 189444 DEBUG nova.network.neutron [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Successfully updated port: e82f4978-3a5a-4e23-8c30-c60478cd656f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.414 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.415 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.415 189444 DEBUG nova.network.neutron [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.457 189444 INFO oslo.privsep.daemon [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.331 239706 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.336 239706 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.339 239706 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.339 239706 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239706#033[00m
Dec 11 14:01:53 compute-0 podman[239707]: 2025-12-11 14:01:53.504211673 +0000 UTC m=+0.098755343 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.550 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.609 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.611 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.612 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.630 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.646 189444 DEBUG nova.network.neutron [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.688 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.689 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031,backing_fmt=raw /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.729 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031,backing_fmt=raw /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.730 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
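The `create_qcow2_image` step that just completed built the instance disk as a copy-on-write qcow2 overlay on top of the shared raw base image, sized to the flavor's 1 GiB root disk. A sketch that assembles the same `qemu-img create` command line as logged; paths are copied from the log above, and the function is an illustration, not Nova's implementation:

```python
# Rebuild the logged `qemu-img create` overlay command: a qcow2 disk whose
# backing file is the raw base image in the _base cache. backing_fmt=raw
# must be stated explicitly so qemu never probes the backing file's format.

def qcow2_overlay_cmd(base: str, disk: str, size_bytes: int) -> list[str]:
    return [
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "create", "-f", "qcow2",
        "-o", f"backing_file={base},backing_fmt=raw",
        disk, str(size_bytes),
    ]

cmd = qcow2_overlay_cmd(
    "/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031",
    "/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk",
    1073741824)
print(" ".join(cmd))
```

Because only deltas land in the per-instance overlay, many instances on this host can share the single cached base image, which is why the base file is guarded by the hash-named lock seen throughout these records.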
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.731 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.789 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.790 189444 DEBUG nova.virt.disk.api [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Checking if we can resize image /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.791 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.851 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.853 189444 DEBUG nova.virt.disk.api [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Cannot resize image /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.853 189444 DEBUG nova.objects.instance [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'migration_context' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.873 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.874 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.875 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.875 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.876 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.877 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.901 189444 DEBUG nova.compute.manager [req-a47f625a-b5e7-4527-8d6e-61d69994f93d req-0a40d738-311e-4127-a47d-d83d2c1a4128 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received event network-changed-e82f4978-3a5a-4e23-8c30-c60478cd656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.902 189444 DEBUG nova.compute.manager [req-a47f625a-b5e7-4527-8d6e-61d69994f93d req-0a40d738-311e-4127-a47d-d83d2c1a4128 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Refreshing instance network info cache due to event network-changed-e82f4978-3a5a-4e23-8c30-c60478cd656f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.902 189444 DEBUG oslo_concurrency.lockutils [req-a47f625a-b5e7-4527-8d6e-61d69994f93d req-0a40d738-311e-4127-a47d-d83d2c1a4128 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.908 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.909 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.958 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.960 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:53 compute-0 nova_compute[189440]: 2025-12-11 14:01:53.976 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.031 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.033 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.033 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.044 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.136 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.137 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.188 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.190 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.190 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.261 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.262 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.263 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Ensure instance console log exists: /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.264 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.264 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.265 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.347 189444 DEBUG nova.network.neutron [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.375 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.376 189444 DEBUG nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Instance network_info: |[{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.376 189444 DEBUG oslo_concurrency.lockutils [req-a47f625a-b5e7-4527-8d6e-61d69994f93d req-0a40d738-311e-4127-a47d-d83d2c1a4128 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.377 189444 DEBUG nova.network.neutron [req-a47f625a-b5e7-4527-8d6e-61d69994f93d req-0a40d738-311e-4127-a47d-d83d2c1a4128 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Refreshing network info cache for port e82f4978-3a5a-4e23-8c30-c60478cd656f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.380 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Start _get_guest_xml network_info=[{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:00:24Z,direct_url=<?>,disk_format='qcow2',id=714a3758-ec97-4149-8cfb-208787ab3704,min_disk=0,min_ram=0,name='cirros',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:00:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '714a3758-ec97-4149-8cfb-208787ab3704'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'size': 1, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.389 189444 WARNING nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.398 189444 DEBUG nova.virt.libvirt.host [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.399 189444 DEBUG nova.virt.libvirt.host [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.403 189444 DEBUG nova.virt.libvirt.host [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.404 189444 DEBUG nova.virt.libvirt.host [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.405 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.406 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:00:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='1d6c0fe6-4c75-4860-b5c4-bc55bee577e2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:00:24Z,direct_url=<?>,disk_format='qcow2',id=714a3758-ec97-4149-8cfb-208787ab3704,min_disk=0,min_ram=0,name='cirros',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:00:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.406 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.406 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.407 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.407 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.407 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.408 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.408 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.408 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.409 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.409 189444 DEBUG nova.virt.hardware [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.412 189444 DEBUG nova.privsep.utils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.413 189444 DEBUG nova.virt.libvirt.vif [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:01:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-o1lpin9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2025-12-11T14:01:50Z,user_data=None,user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=82437023-b24d-48bf-af1c-d1957df4da67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.414 189444 DEBUG nova.network.os_vif_util [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.415 189444 DEBUG nova.network.os_vif_util [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:fb,bridge_name='br-int',has_traffic_filtering=True,id=e82f4978-3a5a-4e23-8c30-c60478cd656f,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape82f4978-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.417 189444 DEBUG nova.objects.instance [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'pci_devices' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.431 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <uuid>82437023-b24d-48bf-af1c-d1957df4da67</uuid>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <name>instance-00000001</name>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <memory>524288</memory>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <nova:name>test_0</nova:name>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:01:54</nova:creationTime>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <nova:flavor name="m1.small">
Dec 11 14:01:54 compute-0 nova_compute[189440]:        <nova:memory>512</nova:memory>
Dec 11 14:01:54 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:01:54 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:01:54 compute-0 nova_compute[189440]:        <nova:ephemeral>1</nova:ephemeral>
Dec 11 14:01:54 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:01:54 compute-0 nova_compute[189440]:        <nova:user uuid="26c7a9a5c1c0404bb144cd3cba8ecf9f">admin</nova:user>
Dec 11 14:01:54 compute-0 nova_compute[189440]:        <nova:project uuid="9c30b62d3d094e1e8b410a2af9fd7d98">admin</nova:project>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="714a3758-ec97-4149-8cfb-208787ab3704"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <nova:ports>
Dec 11 14:01:54 compute-0 nova_compute[189440]:        <nova:port uuid="e82f4978-3a5a-4e23-8c30-c60478cd656f">
Dec 11 14:01:54 compute-0 nova_compute[189440]:          <nova:ip type="fixed" address="192.168.0.20" ipVersion="4"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:        </nova:port>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      </nova:ports>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <system>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <entry name="serial">82437023-b24d-48bf-af1c-d1957df4da67</entry>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <entry name="uuid">82437023-b24d-48bf-af1c-d1957df4da67</entry>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </system>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <os>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  </os>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <features>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  </features>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <target dev="vdb" bus="virtio"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.config"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <interface type="ethernet">
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <mac address="fa:16:3e:4a:ac:fb"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <driver name="vhost" rx_queue_size="512"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <mtu size="1442"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <target dev="tape82f4978-3a"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </interface>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/console.log" append="off"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <video>
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </video>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:01:54 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:01:54 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:01:54 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:01:54 compute-0 nova_compute[189440]: </domain>
Dec 11 14:01:54 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.433 189444 DEBUG nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Preparing to wait for external event network-vif-plugged-e82f4978-3a5a-4e23-8c30-c60478cd656f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.434 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.434 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.435 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.436 189444 DEBUG nova.virt.libvirt.vif [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:01:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-o1lpin9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:01:50Z,user_data=None,user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=82437023-b24d-48bf-af1c-d1957df4da67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.436 189444 DEBUG nova.network.os_vif_util [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.437 189444 DEBUG nova.network.os_vif_util [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:fb,bridge_name='br-int',has_traffic_filtering=True,id=e82f4978-3a5a-4e23-8c30-c60478cd656f,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape82f4978-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.437 189444 DEBUG os_vif [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:fb,bridge_name='br-int',has_traffic_filtering=True,id=e82f4978-3a5a-4e23-8c30-c60478cd656f,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape82f4978-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.475 189444 DEBUG ovsdbapp.backend.ovs_idl [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.475 189444 DEBUG ovsdbapp.backend.ovs_idl [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.476 189444 DEBUG ovsdbapp.backend.ovs_idl [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.476 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.477 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.477 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.478 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.479 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.481 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.491 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.492 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.492 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:01:54 compute-0 nova_compute[189440]: 2025-12-11 14:01:54.493 189444 INFO oslo.privsep.daemon [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpoxtqhvfp/privsep.sock']#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.164 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.255 189444 INFO oslo.privsep.daemon [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.086 239762 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.090 239762 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.092 239762 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.092 239762 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239762#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.585 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.586 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape82f4978-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.587 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape82f4978-3a, col_values=(('external_ids', {'iface-id': 'e82f4978-3a5a-4e23-8c30-c60478cd656f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:ac:fb', 'vm-uuid': '82437023-b24d-48bf-af1c-d1957df4da67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.589 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.591 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:01:55 compute-0 NetworkManager[56353]: <info>  [1765461715.5924] manager: (tape82f4978-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.602 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.604 189444 INFO os_vif [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:fb,bridge_name='br-int',has_traffic_filtering=True,id=e82f4978-3a5a-4e23-8c30-c60478cd656f,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape82f4978-3a')#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.857 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.858 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.858 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.859 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No VIF found with MAC fa:16:3e:4a:ac:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 11 14:01:55 compute-0 nova_compute[189440]: 2025-12-11 14:01:55.859 189444 INFO nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Using config drive#033[00m
Dec 11 14:01:56 compute-0 nova_compute[189440]: 2025-12-11 14:01:56.395 189444 INFO nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Creating config drive at /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.config#033[00m
Dec 11 14:01:56 compute-0 nova_compute[189440]: 2025-12-11 14:01:56.400 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmput82goen execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:01:56 compute-0 nova_compute[189440]: 2025-12-11 14:01:56.510 189444 DEBUG nova.network.neutron [req-a47f625a-b5e7-4527-8d6e-61d69994f93d req-0a40d738-311e-4127-a47d-d83d2c1a4128 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated VIF entry in instance network info cache for port e82f4978-3a5a-4e23-8c30-c60478cd656f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:01:56 compute-0 nova_compute[189440]: 2025-12-11 14:01:56.511 189444 DEBUG nova.network.neutron [req-a47f625a-b5e7-4527-8d6e-61d69994f93d req-0a40d738-311e-4127-a47d-d83d2c1a4128 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:01:56 compute-0 nova_compute[189440]: 2025-12-11 14:01:56.529 189444 DEBUG oslo_concurrency.lockutils [req-a47f625a-b5e7-4527-8d6e-61d69994f93d req-0a40d738-311e-4127-a47d-d83d2c1a4128 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:01:56 compute-0 nova_compute[189440]: 2025-12-11 14:01:56.538 189444 DEBUG oslo_concurrency.processutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmput82goen" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:01:56 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 11 14:01:56 compute-0 NetworkManager[56353]: <info>  [1765461716.6383] manager: (tape82f4978-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Dec 11 14:01:56 compute-0 kernel: tape82f4978-3a: entered promiscuous mode
Dec 11 14:01:56 compute-0 nova_compute[189440]: 2025-12-11 14:01:56.652 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:56 compute-0 ovn_controller[97832]: 2025-12-11T14:01:56Z|00027|binding|INFO|Claiming lport e82f4978-3a5a-4e23-8c30-c60478cd656f for this chassis.
Dec 11 14:01:56 compute-0 ovn_controller[97832]: 2025-12-11T14:01:56Z|00028|binding|INFO|e82f4978-3a5a-4e23-8c30-c60478cd656f: Claiming fa:16:3e:4a:ac:fb 192.168.0.20
Dec 11 14:01:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:56.657 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:ac:fb 192.168.0.20'], port_security=['fa:16:3e:4a:ac:fb 192.168.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.20/24', 'neutron:device_id': '82437023-b24d-48bf-af1c-d1957df4da67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d7aa95c-a649-4fd4-9e5a-18c0b6217450', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d8798ec-229b-449a-9c37-334c24aa485f, chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=e82f4978-3a5a-4e23-8c30-c60478cd656f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:01:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:56.658 106686 INFO neutron.agent.ovn.metadata.agent [-] Port e82f4978-3a5a-4e23-8c30-c60478cd656f in datapath 62eb1d54-32e6-4ea5-8151-f2c97214c84d bound to our chassis#033[00m
Dec 11 14:01:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:56.661 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62eb1d54-32e6-4ea5-8151-f2c97214c84d#033[00m
Dec 11 14:01:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:56.662 106686 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmprhhny4o7/privsep.sock']#033[00m
Dec 11 14:01:56 compute-0 systemd-udevd[239806]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:01:56 compute-0 NetworkManager[56353]: <info>  [1765461716.6956] device (tape82f4978-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 14:01:56 compute-0 NetworkManager[56353]: <info>  [1765461716.6962] device (tape82f4978-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 11 14:01:56 compute-0 systemd-machined[155778]: New machine qemu-1-instance-00000001.
Dec 11 14:01:56 compute-0 nova_compute[189440]: 2025-12-11 14:01:56.736 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:56 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 11 14:01:56 compute-0 ovn_controller[97832]: 2025-12-11T14:01:56Z|00029|binding|INFO|Setting lport e82f4978-3a5a-4e23-8c30-c60478cd656f ovn-installed in OVS
Dec 11 14:01:56 compute-0 ovn_controller[97832]: 2025-12-11T14:01:56Z|00030|binding|INFO|Setting lport e82f4978-3a5a-4e23-8c30-c60478cd656f up in Southbound
Dec 11 14:01:56 compute-0 nova_compute[189440]: 2025-12-11 14:01:56.746 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:01:56 compute-0 podman[239778]: 2025-12-11 14:01:56.762115971 +0000 UTC m=+0.155146941 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 11 14:01:57 compute-0 nova_compute[189440]: 2025-12-11 14:01:57.370 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765461717.3697672, 82437023-b24d-48bf-af1c-d1957df4da67 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:01:57 compute-0 nova_compute[189440]: 2025-12-11 14:01:57.370 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] VM Started (Lifecycle Event)#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.395 106686 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.396 106686 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprhhny4o7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.243 239832 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.247 239832 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.249 239832 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.249 239832 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239832#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.399 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7a267af0-db89-4145-9ad6-4a315b9c0e1d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:01:57 compute-0 nova_compute[189440]: 2025-12-11 14:01:57.493 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:01:57 compute-0 nova_compute[189440]: 2025-12-11 14:01:57.499 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765461717.3699467, 82437023-b24d-48bf-af1c-d1957df4da67 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:01:57 compute-0 nova_compute[189440]: 2025-12-11 14:01:57.499 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] VM Paused (Lifecycle Event)#033[00m
Dec 11 14:01:57 compute-0 nova_compute[189440]: 2025-12-11 14:01:57.613 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:01:57 compute-0 nova_compute[189440]: 2025-12-11 14:01:57.621 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:01:57 compute-0 nova_compute[189440]: 2025-12-11 14:01:57.654 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.946 239832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.946 239832 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:57.946 239832 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.305 189444 DEBUG nova.compute.manager [req-ac8308bf-dd03-4a03-adf5-9e38af095269 req-3f360e78-1785-45c6-864d-5642eba50e12 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received event network-vif-plugged-e82f4978-3a5a-4e23-8c30-c60478cd656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.305 189444 DEBUG oslo_concurrency.lockutils [req-ac8308bf-dd03-4a03-adf5-9e38af095269 req-3f360e78-1785-45c6-864d-5642eba50e12 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.306 189444 DEBUG oslo_concurrency.lockutils [req-ac8308bf-dd03-4a03-adf5-9e38af095269 req-3f360e78-1785-45c6-864d-5642eba50e12 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.306 189444 DEBUG oslo_concurrency.lockutils [req-ac8308bf-dd03-4a03-adf5-9e38af095269 req-3f360e78-1785-45c6-864d-5642eba50e12 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.307 189444 DEBUG nova.compute.manager [req-ac8308bf-dd03-4a03-adf5-9e38af095269 req-3f360e78-1785-45c6-864d-5642eba50e12 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Processing event network-vif-plugged-e82f4978-3a5a-4e23-8c30-c60478cd656f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.307 189444 DEBUG nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.321 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.322 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765461718.320704, 82437023-b24d-48bf-af1c-d1957df4da67 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.322 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] VM Resumed (Lifecycle Event)#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.328 189444 INFO nova.virt.libvirt.driver [-] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Instance spawned successfully.#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.329 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.344 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.353 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.358 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.359 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.359 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.360 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.361 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.362 189444 DEBUG nova.virt.libvirt.driver [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.388 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.447 189444 INFO nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Took 7.90 seconds to spawn the instance on the hypervisor.#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.448 189444 DEBUG nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.511 189444 INFO nova.compute.manager [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Took 8.33 seconds to build instance.#033[00m
Dec 11 14:01:58 compute-0 nova_compute[189440]: 2025-12-11 14:01:58.527 189444 DEBUG oslo_concurrency.lockutils [None req-2701d68e-c3fe-4330-a6d6-638377dcdf66 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:01:58 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:58.542 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[2102fa65-2775-49e5-b721-1b05d5068e25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:01:58 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:58.544 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62eb1d54-31 in ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 11 14:01:58 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:58.547 239832 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62eb1d54-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 11 14:01:58 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:58.547 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbbb9d2-dc60-479b-b706-05d158e6c825]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:01:58 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:58.551 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[700f40c0-5136-4373-9c61-233eff5a3b02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:01:58 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:58.581 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[bce8990d-2e2e-4525-95b1-bedd8ec5f510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:01:58 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 11 14:01:58 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:58.605 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e0081e-acce-480c-bcd2-964df3cfd190]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:01:58 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:58.608 106686 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpl1_30jfl/privsep.sock']#033[00m
Dec 11 14:01:58 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 11 14:01:59 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:59.433 106686 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 11 14:01:59 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:59.436 106686 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpl1_30jfl/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec 11 14:01:59 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:59.246 239872 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 11 14:01:59 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:59.252 239872 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 11 14:01:59 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:59.254 239872 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec 11 14:01:59 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:59.254 239872 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239872#033[00m
Dec 11 14:01:59 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:01:59.440 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[65dd56d2-c7a3-4280-847b-9f7353f83660]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:01:59 compute-0 podman[203650]: time="2025-12-11T14:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:01:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:01:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4295 "" "Go-http-client/1.1"
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.087 239872 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.087 239872 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.087 239872 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:02:00 compute-0 nova_compute[189440]: 2025-12-11 14:02:00.168 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:00 compute-0 nova_compute[189440]: 2025-12-11 14:02:00.462 189444 DEBUG nova.compute.manager [req-b5c22847-bd7c-4478-9ad8-d850ec041b9d req-53d41da9-df81-41dc-99c5-4caaa7edae2a a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received event network-vif-plugged-e82f4978-3a5a-4e23-8c30-c60478cd656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:02:00 compute-0 nova_compute[189440]: 2025-12-11 14:02:00.463 189444 DEBUG oslo_concurrency.lockutils [req-b5c22847-bd7c-4478-9ad8-d850ec041b9d req-53d41da9-df81-41dc-99c5-4caaa7edae2a a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:02:00 compute-0 nova_compute[189440]: 2025-12-11 14:02:00.463 189444 DEBUG oslo_concurrency.lockutils [req-b5c22847-bd7c-4478-9ad8-d850ec041b9d req-53d41da9-df81-41dc-99c5-4caaa7edae2a a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:02:00 compute-0 nova_compute[189440]: 2025-12-11 14:02:00.463 189444 DEBUG oslo_concurrency.lockutils [req-b5c22847-bd7c-4478-9ad8-d850ec041b9d req-53d41da9-df81-41dc-99c5-4caaa7edae2a a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:02:00 compute-0 nova_compute[189440]: 2025-12-11 14:02:00.464 189444 DEBUG nova.compute.manager [req-b5c22847-bd7c-4478-9ad8-d850ec041b9d req-53d41da9-df81-41dc-99c5-4caaa7edae2a a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] No waiting events found dispatching network-vif-plugged-e82f4978-3a5a-4e23-8c30-c60478cd656f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:02:00 compute-0 nova_compute[189440]: 2025-12-11 14:02:00.464 189444 WARNING nova.compute.manager [req-b5c22847-bd7c-4478-9ad8-d850ec041b9d req-53d41da9-df81-41dc-99c5-4caaa7edae2a a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received unexpected event network-vif-plugged-e82f4978-3a5a-4e23-8c30-c60478cd656f for instance with vm_state active and task_state None.#033[00m
Dec 11 14:02:00 compute-0 nova_compute[189440]: 2025-12-11 14:02:00.588 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.739 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[3531b38a-db96-45f7-aa8f-e2d1baae7581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.769 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[a76df87c-b8a2-4ce8-bcd6-1f8a9ff9c8cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:00 compute-0 NetworkManager[56353]: <info>  [1765461720.7752] manager: (tap62eb1d54-30): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.805 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[a56ea1ba-e74a-453e-a13b-78ecf1cf4857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.808 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0ed5df-6e4a-4ba5-a244-2aeb46c0e5a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:00 compute-0 systemd-udevd[239891]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:02:00 compute-0 NetworkManager[56353]: <info>  [1765461720.8399] device (tap62eb1d54-30): carrier: link connected
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.845 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9f9aff-4a58-4211-b235-f1e3a85b6afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.872 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[c490b81b-5983-4c4f-a0c7-d51a912acd2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62eb1d54-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:cc:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378116, 'reachable_time': 33901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239913, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.892 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[220f67ad-5d85-43d2-86e1-7710d0d07e9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:cc24'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378116, 'tstamp': 378116}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239920, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:00 compute-0 podman[239881]: 2025-12-11 14:02:00.897693596 +0000 UTC m=+0.093990537 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc.)
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.910 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec1e3db-63f3-4f25-af42-f2ad8b902f69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62eb1d54-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:cc:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378116, 'reachable_time': 33901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239923, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.938 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[f65e8e38-dff8-4906-ba3b-a3b811ff67c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.996 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[6766af13-8c0b-4392-b7cb-93134cc9fd87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:01 compute-0 kernel: tap62eb1d54-30: entered promiscuous mode
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.998 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62eb1d54-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.998 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:00.999 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62eb1d54-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:02:01 compute-0 nova_compute[189440]: 2025-12-11 14:02:01.003 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:01 compute-0 NetworkManager[56353]: <info>  [1765461721.0041] manager: (tap62eb1d54-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:01.010 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62eb1d54-30, col_values=(('external_ids', {'iface-id': 'dd9a733c-26da-4e0b-928d-1f82d21083bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:02:01 compute-0 nova_compute[189440]: 2025-12-11 14:02:01.011 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:01 compute-0 ovn_controller[97832]: 2025-12-11T14:02:01Z|00031|binding|INFO|Releasing lport dd9a733c-26da-4e0b-928d-1f82d21083bb from this chassis (sb_readonly=0)
Dec 11 14:02:01 compute-0 nova_compute[189440]: 2025-12-11 14:02:01.023 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:01 compute-0 nova_compute[189440]: 2025-12-11 14:02:01.024 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:01.025 106686 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62eb1d54-32e6-4ea5-8151-f2c97214c84d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62eb1d54-32e6-4ea5-8151-f2c97214c84d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:01.026 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[07e00c1d-03ec-4b31-adfe-d6a5c408ee29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:01.028 106686 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: global
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    log         /dev/log local0 debug
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    log-tag     haproxy-metadata-proxy-62eb1d54-32e6-4ea5-8151-f2c97214c84d
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    user        root
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    group       root
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    maxconn     1024
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    pidfile     /var/lib/neutron/external/pids/62eb1d54-32e6-4ea5-8151-f2c97214c84d.pid.haproxy
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    daemon
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: defaults
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    log global
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    mode http
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    option httplog
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    option dontlognull
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    option http-server-close
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    option forwardfor
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    retries                 3
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    timeout http-request    30s
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    timeout connect         30s
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    timeout client          32s
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    timeout server          32s
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    timeout http-keep-alive 30s
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: listen listener
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    bind 169.254.169.254:80
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    server metadata /var/lib/neutron/metadata_proxy
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]:    http-request add-header X-OVN-Network-ID 62eb1d54-32e6-4ea5-8151-f2c97214c84d
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 11 14:02:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:01.031 106686 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'env', 'PROCESS_TAG=haproxy-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62eb1d54-32e6-4ea5-8151-f2c97214c84d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 11 14:02:01 compute-0 openstack_network_exporter[205834]: ERROR   14:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:02:01 compute-0 openstack_network_exporter[205834]: ERROR   14:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:02:01 compute-0 openstack_network_exporter[205834]: ERROR   14:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:02:01 compute-0 openstack_network_exporter[205834]: ERROR   14:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:02:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:02:01 compute-0 openstack_network_exporter[205834]: ERROR   14:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:02:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:02:01 compute-0 podman[239954]: 2025-12-11 14:02:01.494832912 +0000 UTC m=+0.095836952 container create c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 11 14:02:01 compute-0 systemd[1]: Started libpod-conmon-c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda.scope.
Dec 11 14:02:01 compute-0 podman[239954]: 2025-12-11 14:02:01.451683438 +0000 UTC m=+0.052687488 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 11 14:02:01 compute-0 systemd[1]: Started libcrun container.
Dec 11 14:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1a1cb75262f298eeb24e9112e6eb20d6013c5279ecc8fc3521423bd6fa0484/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 11 14:02:01 compute-0 podman[239954]: 2025-12-11 14:02:01.608489568 +0000 UTC m=+0.209493638 container init c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 11 14:02:01 compute-0 podman[239954]: 2025-12-11 14:02:01.61634523 +0000 UTC m=+0.217349270 container start c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 11 14:02:01 compute-0 neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d[239968]: [NOTICE]   (239972) : New worker (239974) forked
Dec 11 14:02:01 compute-0 neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d[239968]: [NOTICE]   (239972) : Loading success.
Dec 11 14:02:03 compute-0 podman[239983]: 2025-12-11 14:02:03.504184942 +0000 UTC m=+0.094844068 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:02:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:04.075 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:02:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:04.075 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:02:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:02:04.076 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:02:05 compute-0 nova_compute[189440]: 2025-12-11 14:02:05.172 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:05 compute-0 nova_compute[189440]: 2025-12-11 14:02:05.591 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:10 compute-0 nova_compute[189440]: 2025-12-11 14:02:10.176 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:10 compute-0 nova_compute[189440]: 2025-12-11 14:02:10.594 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:14 compute-0 podman[240006]: 2025-12-11 14:02:14.779724527 +0000 UTC m=+0.101788347 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.179 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:15 compute-0 ovn_controller[97832]: 2025-12-11T14:02:15Z|00032|binding|INFO|Releasing lport dd9a733c-26da-4e0b-928d-1f82d21083bb from this chassis (sb_readonly=0)
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <info>  [1765461735.3948] manager: (patch-br-int-to-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <info>  [1765461735.3953] device (patch-br-int-to-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <warn>  [1765461735.3956] device (patch-br-int-to-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.393 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <info>  [1765461735.3962] manager: (patch-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <info>  [1765461735.3965] device (patch-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <warn>  [1765461735.3966] device (patch-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <info>  [1765461735.3976] manager: (patch-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <info>  [1765461735.3983] manager: (patch-br-int-to-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <info>  [1765461735.4137] device (patch-br-int-to-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 11 14:02:15 compute-0 NetworkManager[56353]: <info>  [1765461735.4142] device (patch-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 11 14:02:15 compute-0 ovn_controller[97832]: 2025-12-11T14:02:15Z|00033|binding|INFO|Releasing lport dd9a733c-26da-4e0b-928d-1f82d21083bb from this chassis (sb_readonly=0)
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.436 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.442 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.597 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.671 189444 DEBUG nova.compute.manager [req-1e5b5796-cfbc-47ac-a380-3ed517023db6 req-1c332e83-cef0-4df5-8a32-353a4bbed649 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received event network-changed-e82f4978-3a5a-4e23-8c30-c60478cd656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.672 189444 DEBUG nova.compute.manager [req-1e5b5796-cfbc-47ac-a380-3ed517023db6 req-1c332e83-cef0-4df5-8a32-353a4bbed649 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Refreshing instance network info cache due to event network-changed-e82f4978-3a5a-4e23-8c30-c60478cd656f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.672 189444 DEBUG oslo_concurrency.lockutils [req-1e5b5796-cfbc-47ac-a380-3ed517023db6 req-1c332e83-cef0-4df5-8a32-353a4bbed649 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.673 189444 DEBUG oslo_concurrency.lockutils [req-1e5b5796-cfbc-47ac-a380-3ed517023db6 req-1c332e83-cef0-4df5-8a32-353a4bbed649 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:02:15 compute-0 nova_compute[189440]: 2025-12-11 14:02:15.673 189444 DEBUG nova.network.neutron [req-1e5b5796-cfbc-47ac-a380-3ed517023db6 req-1c332e83-cef0-4df5-8a32-353a4bbed649 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Refreshing network info cache for port e82f4978-3a5a-4e23-8c30-c60478cd656f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:02:16 compute-0 nova_compute[189440]: 2025-12-11 14:02:16.861 189444 DEBUG nova.network.neutron [req-1e5b5796-cfbc-47ac-a380-3ed517023db6 req-1c332e83-cef0-4df5-8a32-353a4bbed649 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated VIF entry in instance network info cache for port e82f4978-3a5a-4e23-8c30-c60478cd656f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:02:16 compute-0 nova_compute[189440]: 2025-12-11 14:02:16.862 189444 DEBUG nova.network.neutron [req-1e5b5796-cfbc-47ac-a380-3ed517023db6 req-1c332e83-cef0-4df5-8a32-353a4bbed649 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:02:16 compute-0 nova_compute[189440]: 2025-12-11 14:02:16.883 189444 DEBUG oslo_concurrency.lockutils [req-1e5b5796-cfbc-47ac-a380-3ed517023db6 req-1c332e83-cef0-4df5-8a32-353a4bbed649 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:02:17 compute-0 podman[240030]: 2025-12-11 14:02:17.541590509 +0000 UTC m=+0.131053413 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 11 14:02:19 compute-0 podman[240051]: 2025-12-11 14:02:19.508136773 +0000 UTC m=+0.093005523 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 11 14:02:20 compute-0 nova_compute[189440]: 2025-12-11 14:02:20.184 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:20 compute-0 nova_compute[189440]: 2025-12-11 14:02:20.599 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:22 compute-0 podman[240070]: 2025-12-11 14:02:22.555072627 +0000 UTC m=+0.135093090 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:02:22 compute-0 podman[240071]: 2025-12-11 14:02:22.568207478 +0000 UTC m=+0.146706734 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, version=9.4, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 11 14:02:24 compute-0 podman[240106]: 2025-12-11 14:02:24.484175057 +0000 UTC m=+0.089905907 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 14:02:25 compute-0 nova_compute[189440]: 2025-12-11 14:02:25.186 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:25 compute-0 nova_compute[189440]: 2025-12-11 14:02:25.602 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:27 compute-0 podman[240124]: 2025-12-11 14:02:27.534388212 +0000 UTC m=+0.138086704 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 11 14:02:29 compute-0 podman[203650]: time="2025-12-11T14:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:02:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:02:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Dec 11 14:02:30 compute-0 nova_compute[189440]: 2025-12-11 14:02:30.189 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:30 compute-0 nova_compute[189440]: 2025-12-11 14:02:30.605 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:31 compute-0 openstack_network_exporter[205834]: ERROR   14:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:02:31 compute-0 openstack_network_exporter[205834]: ERROR   14:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:02:31 compute-0 openstack_network_exporter[205834]: ERROR   14:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:02:31 compute-0 openstack_network_exporter[205834]: ERROR   14:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:02:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:02:31 compute-0 openstack_network_exporter[205834]: ERROR   14:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:02:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:02:31 compute-0 podman[240152]: 2025-12-11 14:02:31.533680738 +0000 UTC m=+0.130335595 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 11 14:02:33 compute-0 ovn_controller[97832]: 2025-12-11T14:02:33Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:ac:fb 192.168.0.20
Dec 11 14:02:33 compute-0 ovn_controller[97832]: 2025-12-11T14:02:33Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:ac:fb 192.168.0.20
Dec 11 14:02:34 compute-0 podman[240186]: 2025-12-11 14:02:34.489547767 +0000 UTC m=+0.084637118 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 14:02:35 compute-0 nova_compute[189440]: 2025-12-11 14:02:35.192 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:35 compute-0 nova_compute[189440]: 2025-12-11 14:02:35.607 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:40 compute-0 nova_compute[189440]: 2025-12-11 14:02:40.194 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:40 compute-0 nova_compute[189440]: 2025-12-11 14:02:40.609 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:41 compute-0 nova_compute[189440]: 2025-12-11 14:02:41.237 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:41 compute-0 nova_compute[189440]: 2025-12-11 14:02:41.238 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 11 14:02:41 compute-0 nova_compute[189440]: 2025-12-11 14:02:41.717 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 11 14:02:42 compute-0 nova_compute[189440]: 2025-12-11 14:02:42.719 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:42 compute-0 nova_compute[189440]: 2025-12-11 14:02:42.720 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:02:44 compute-0 nova_compute[189440]: 2025-12-11 14:02:44.237 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.198 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.231 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:45 compute-0 ovn_controller[97832]: 2025-12-11T14:02:45Z|00034|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Dec 11 14:02:45 compute-0 podman[240209]: 2025-12-11 14:02:45.476147755 +0000 UTC m=+0.074844220 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.482 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.527 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.528 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.528 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.528 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.613 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.640 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.737 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.739 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.814 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.816 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.879 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.881 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:02:45 compute-0 nova_compute[189440]: 2025-12-11 14:02:45.947 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:02:46 compute-0 nova_compute[189440]: 2025-12-11 14:02:46.307 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:02:46 compute-0 nova_compute[189440]: 2025-12-11 14:02:46.308 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5258MB free_disk=72.37405395507812GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:02:46 compute-0 nova_compute[189440]: 2025-12-11 14:02:46.309 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:02:46 compute-0 nova_compute[189440]: 2025-12-11 14:02:46.309 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.429 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.430 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.430 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.479 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.515 189444 ERROR nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [req-46d07df7-8f0a-4d6c-902e-abf8005fed85] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 1bda6308-729f-4919-a8ba-89570b8721fc.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-46d07df7-8f0a-4d6c-902e-abf8005fed85"}]}#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.539 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing inventories for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.563 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating ProviderTree inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.564 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.594 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing aggregate associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.632 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing trait associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 11 14:02:47 compute-0 nova_compute[189440]: 2025-12-11 14:02:47.681 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:02:48 compute-0 nova_compute[189440]: 2025-12-11 14:02:48.002 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updated inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec 11 14:02:48 compute-0 nova_compute[189440]: 2025-12-11 14:02:48.002 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating resource provider 1bda6308-729f-4919-a8ba-89570b8721fc generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 11 14:02:48 compute-0 nova_compute[189440]: 2025-12-11 14:02:48.003 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:02:48 compute-0 nova_compute[189440]: 2025-12-11 14:02:48.071 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:02:48 compute-0 nova_compute[189440]: 2025-12-11 14:02:48.072 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:02:48 compute-0 podman[240246]: 2025-12-11 14:02:48.536449296 +0000 UTC m=+0.130312404 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 14:02:48 compute-0 nova_compute[189440]: 2025-12-11 14:02:48.826 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:48 compute-0 nova_compute[189440]: 2025-12-11 14:02:48.826 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:48 compute-0 nova_compute[189440]: 2025-12-11 14:02:48.826 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:02:48 compute-0 nova_compute[189440]: 2025-12-11 14:02:48.826 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:02:49 compute-0 nova_compute[189440]: 2025-12-11 14:02:49.838 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:02:49 compute-0 nova_compute[189440]: 2025-12-11 14:02:49.838 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:02:49 compute-0 nova_compute[189440]: 2025-12-11 14:02:49.839 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:02:49 compute-0 nova_compute[189440]: 2025-12-11 14:02:49.839 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:02:50 compute-0 nova_compute[189440]: 2025-12-11 14:02:50.201 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:50 compute-0 podman[240267]: 2025-12-11 14:02:50.5075212 +0000 UTC m=+0.104108953 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 11 14:02:50 compute-0 nova_compute[189440]: 2025-12-11 14:02:50.616 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.881 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.903 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.904 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.905 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.906 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.907 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.908 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.909 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.951 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Triggering sync for uuid 82437023-b24d-48bf-af1c-d1957df4da67 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.953 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.954 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "82437023-b24d-48bf-af1c-d1957df4da67" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.955 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:52 compute-0 nova_compute[189440]: 2025-12-11 14:02:52.957 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 11 14:02:53 compute-0 nova_compute[189440]: 2025-12-11 14:02:53.075 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:02:53 compute-0 nova_compute[189440]: 2025-12-11 14:02:53.120 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "82437023-b24d-48bf-af1c-d1957df4da67" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:02:53 compute-0 podman[240288]: 2025-12-11 14:02:53.49371009 +0000 UTC m=+0.076382166 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, distribution-scope=public, release=1214.1726694543, config_id=edpm, container_name=kepler, release-0.7.12=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 11 14:02:53 compute-0 podman[240287]: 2025-12-11 14:02:53.49613537 +0000 UTC m=+0.087638992 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Dec 11 14:02:55 compute-0 nova_compute[189440]: 2025-12-11 14:02:55.203 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:55 compute-0 podman[240322]: 2025-12-11 14:02:55.492636706 +0000 UTC m=+0.090470641 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:02:55 compute-0 nova_compute[189440]: 2025-12-11 14:02:55.619 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:02:58 compute-0 podman[240341]: 2025-12-11 14:02:58.501120851 +0000 UTC m=+0.105008586 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:02:59 compute-0 podman[203650]: time="2025-12-11T14:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:02:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:02:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec 11 14:03:00 compute-0 nova_compute[189440]: 2025-12-11 14:03:00.206 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:00 compute-0 nova_compute[189440]: 2025-12-11 14:03:00.621 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:01 compute-0 openstack_network_exporter[205834]: ERROR   14:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:03:01 compute-0 openstack_network_exporter[205834]: ERROR   14:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:03:01 compute-0 openstack_network_exporter[205834]: ERROR   14:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:03:01 compute-0 openstack_network_exporter[205834]: ERROR   14:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:03:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:03:01 compute-0 openstack_network_exporter[205834]: ERROR   14:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:03:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:03:02 compute-0 podman[240368]: 2025-12-11 14:03:02.525432065 +0000 UTC m=+0.110002210 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, 
com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 11 14:03:03 compute-0 nova_compute[189440]: 2025-12-11 14:03:03.281 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:03 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:03.281 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:03:03 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:03.283 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:03:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:04.076 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:04.076 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:04.077 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:05 compute-0 nova_compute[189440]: 2025-12-11 14:03:05.210 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:05 compute-0 podman[240389]: 2025-12-11 14:03:05.493729214 +0000 UTC m=+0.091729677 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:03:05 compute-0 nova_compute[189440]: 2025-12-11 14:03:05.624 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.541 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.542 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.555 189444 DEBUG nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.633 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.634 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.644 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.644 189444 INFO nova.compute.claims [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.772 189444 DEBUG nova.compute.provider_tree [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.787 189444 DEBUG nova.scheduler.client.report [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.811 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.812 189444 DEBUG nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.862 189444 DEBUG nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.863 189444 DEBUG nova.network.neutron [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.919 189444 INFO nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 11 14:03:07 compute-0 nova_compute[189440]: 2025-12-11 14:03:07.970 189444 DEBUG nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.069 189444 DEBUG nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.070 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.071 189444 INFO nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Creating image(s)#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.071 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.072 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.073 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.089 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.148 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.150 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.151 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.165 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.226 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.227 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031,backing_fmt=raw /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.270 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031,backing_fmt=raw /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.271 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.271 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.340 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.341 189444 DEBUG nova.virt.disk.api [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Checking if we can resize image /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.341 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.402 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.403 189444 DEBUG nova.virt.disk.api [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Cannot resize image /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.403 189444 DEBUG nova.objects.instance [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'migration_context' on Instance uuid 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.425 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.426 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.428 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.440 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.498 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.499 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.500 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.510 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.599 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.600 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.649 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.650 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.651 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.710 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.711 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.711 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Ensure instance console log exists: /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.712 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.712 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:08 compute-0 nova_compute[189440]: 2025-12-11 14:03:08.713 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:09 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:09.287 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:03:10 compute-0 nova_compute[189440]: 2025-12-11 14:03:10.212 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:10 compute-0 nova_compute[189440]: 2025-12-11 14:03:10.627 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:12 compute-0 nova_compute[189440]: 2025-12-11 14:03:12.936 189444 DEBUG nova.network.neutron [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Successfully updated port: f5b2dabe-ea06-4461-8450-3d306c4cd300 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 11 14:03:12 compute-0 nova_compute[189440]: 2025-12-11 14:03:12.956 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:03:12 compute-0 nova_compute[189440]: 2025-12-11 14:03:12.957 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquired lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:03:12 compute-0 nova_compute[189440]: 2025-12-11 14:03:12.957 189444 DEBUG nova.network.neutron [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:03:13 compute-0 nova_compute[189440]: 2025-12-11 14:03:13.035 189444 DEBUG nova.compute.manager [req-7a2c5a95-0cec-4685-aa63-693d2edd8b93 req-56a9bd81-9c42-4814-9049-0a20cb207089 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Received event network-changed-f5b2dabe-ea06-4461-8450-3d306c4cd300 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:03:13 compute-0 nova_compute[189440]: 2025-12-11 14:03:13.035 189444 DEBUG nova.compute.manager [req-7a2c5a95-0cec-4685-aa63-693d2edd8b93 req-56a9bd81-9c42-4814-9049-0a20cb207089 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Refreshing instance network info cache due to event network-changed-f5b2dabe-ea06-4461-8450-3d306c4cd300. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:03:13 compute-0 nova_compute[189440]: 2025-12-11 14:03:13.036 189444 DEBUG oslo_concurrency.lockutils [req-7a2c5a95-0cec-4685-aa63-693d2edd8b93 req-56a9bd81-9c42-4814-9049-0a20cb207089 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:03:13 compute-0 nova_compute[189440]: 2025-12-11 14:03:13.100 189444 DEBUG nova.network.neutron [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.019 189444 DEBUG nova.network.neutron [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updating instance_info_cache with network_info: [{"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.041 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Releasing lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.041 189444 DEBUG nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Instance network_info: |[{"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.042 189444 DEBUG oslo_concurrency.lockutils [req-7a2c5a95-0cec-4685-aa63-693d2edd8b93 req-56a9bd81-9c42-4814-9049-0a20cb207089 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.042 189444 DEBUG nova.network.neutron [req-7a2c5a95-0cec-4685-aa63-693d2edd8b93 req-56a9bd81-9c42-4814-9049-0a20cb207089 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Refreshing network info cache for port f5b2dabe-ea06-4461-8450-3d306c4cd300 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.045 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Start _get_guest_xml network_info=[{"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:00:24Z,direct_url=<?>,disk_format='qcow2',id=714a3758-ec97-4149-8cfb-208787ab3704,min_disk=0,min_ram=0,name='cirros',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:00:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '714a3758-ec97-4149-8cfb-208787ab3704'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'size': 1, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.052 189444 WARNING nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.058 189444 DEBUG nova.virt.libvirt.host [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.059 189444 DEBUG nova.virt.libvirt.host [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.064 189444 DEBUG nova.virt.libvirt.host [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.065 189444 DEBUG nova.virt.libvirt.host [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.065 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.066 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:00:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='1d6c0fe6-4c75-4860-b5c4-bc55bee577e2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:00:24Z,direct_url=<?>,disk_format='qcow2',id=714a3758-ec97-4149-8cfb-208787ab3704,min_disk=0,min_ram=0,name='cirros',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:00:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.066 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.067 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.067 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.068 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.068 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.068 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.069 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.069 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.070 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.070 189444 DEBUG nova.virt.hardware [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.074 189444 DEBUG nova.virt.libvirt.vif [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:03:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma',id=2,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='f7b42205-1b4f-49eb-9f02-9c04957c72b4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-accqusqn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:03:08Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTExNzEyMjI5NjIzMzcwOTQzMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTE3MTIyMjk2MjMzNzA5NDMxND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTExNzEyMjI5NjIzMzcwOTQzMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec 11 14:03:15 compute-0 nova_compute[189440]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTE3MTIyMjk2MjMzNzA5NDMxND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTExNzEyMjI5NjIzMzcwOTQzMTQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0tLQo=',user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.074 189444 DEBUG nova.network.os_vif_util [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.075 189444 DEBUG nova.network.os_vif_util [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f0:71,bridge_name='br-int',has_traffic_filtering=True,id=f5b2dabe-ea06-4461-8450-3d306c4cd300,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf5b2dabe-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.076 189444 DEBUG nova.objects.instance [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.092 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <uuid>98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2</uuid>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <name>instance-00000002</name>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <memory>524288</memory>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <nova:name>vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma</nova:name>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:03:15</nova:creationTime>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <nova:flavor name="m1.small">
Dec 11 14:03:15 compute-0 nova_compute[189440]:        <nova:memory>512</nova:memory>
Dec 11 14:03:15 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:03:15 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:03:15 compute-0 nova_compute[189440]:        <nova:ephemeral>1</nova:ephemeral>
Dec 11 14:03:15 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:03:15 compute-0 nova_compute[189440]:        <nova:user uuid="26c7a9a5c1c0404bb144cd3cba8ecf9f">admin</nova:user>
Dec 11 14:03:15 compute-0 nova_compute[189440]:        <nova:project uuid="9c30b62d3d094e1e8b410a2af9fd7d98">admin</nova:project>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="714a3758-ec97-4149-8cfb-208787ab3704"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <nova:ports>
Dec 11 14:03:15 compute-0 nova_compute[189440]:        <nova:port uuid="f5b2dabe-ea06-4461-8450-3d306c4cd300">
Dec 11 14:03:15 compute-0 nova_compute[189440]:          <nova:ip type="fixed" address="192.168.0.184" ipVersion="4"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:        </nova:port>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      </nova:ports>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <system>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <entry name="serial">98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2</entry>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <entry name="uuid">98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2</entry>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </system>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <os>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  </os>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <features>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  </features>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <target dev="vdb" bus="virtio"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.config"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <interface type="ethernet">
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <mac address="fa:16:3e:fb:f0:71"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <driver name="vhost" rx_queue_size="512"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <mtu size="1442"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <target dev="tapf5b2dabe-ea"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </interface>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/console.log" append="off"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <video>
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </video>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:03:15 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:03:15 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:03:15 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:03:15 compute-0 nova_compute[189440]: </domain>
Dec 11 14:03:15 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.093 189444 DEBUG nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Preparing to wait for external event network-vif-plugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.093 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.094 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.094 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.095 189444 DEBUG nova.virt.libvirt.vif [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:03:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma',id=2,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='f7b42205-1b4f-49eb-9f02-9c04957c72b4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-accqusqn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:03:08Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTExNzEyMjI5NjIzMzcwOTQzMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTE3MTIyMjk2MjMzNzA5NDMxND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTExNzEyMjI5NjIzMzcwOTQzMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec 11 14:03:15 compute-0 nova_compute[189440]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTE3MTIyMjk2MjMzNzA5NDMxND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTExNzEyMjI5NjIzMzcwOTQzMTQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0tLQo=',user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.095 189444 DEBUG nova.network.os_vif_util [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.096 189444 DEBUG nova.network.os_vif_util [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f0:71,bridge_name='br-int',has_traffic_filtering=True,id=f5b2dabe-ea06-4461-8450-3d306c4cd300,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf5b2dabe-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.096 189444 DEBUG os_vif [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f0:71,bridge_name='br-int',has_traffic_filtering=True,id=f5b2dabe-ea06-4461-8450-3d306c4cd300,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf5b2dabe-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.097 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.097 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.098 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.103 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.103 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5b2dabe-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.104 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5b2dabe-ea, col_values=(('external_ids', {'iface-id': 'f5b2dabe-ea06-4461-8450-3d306c4cd300', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:f0:71', 'vm-uuid': '98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.106 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:15 compute-0 NetworkManager[56353]: <info>  [1765461795.1075] manager: (tapf5b2dabe-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.110 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.115 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.117 189444 INFO os_vif [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f0:71,bridge_name='br-int',has_traffic_filtering=True,id=f5b2dabe-ea06-4461-8450-3d306c4cd300,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf5b2dabe-ea')#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.181 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.182 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.182 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:03:15 compute-0 rsyslogd[236802]: message too long (8192) with configured size 8096, begin of message is: 2025-12-11 14:03:15.074 189444 DEBUG nova.virt.libvirt.vif [None req-b17add6c-9b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.182 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No VIF found with MAC fa:16:3e:fb:f0:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.183 189444 INFO nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Using config drive#033[00m
Dec 11 14:03:15 compute-0 nova_compute[189440]: 2025-12-11 14:03:15.214 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:15 compute-0 rsyslogd[236802]: message too long (8192) with configured size 8096, begin of message is: 2025-12-11 14:03:15.095 189444 DEBUG nova.virt.libvirt.vif [None req-b17add6c-9b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.032 189444 INFO nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Creating config drive at /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.config#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.039 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfvxlucn9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.163 189444 DEBUG oslo_concurrency.processutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfvxlucn9" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:16 compute-0 kernel: tapf5b2dabe-ea: entered promiscuous mode
Dec 11 14:03:16 compute-0 NetworkManager[56353]: <info>  [1765461796.2608] manager: (tapf5b2dabe-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.264 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:16 compute-0 ovn_controller[97832]: 2025-12-11T14:03:16Z|00035|binding|INFO|Claiming lport f5b2dabe-ea06-4461-8450-3d306c4cd300 for this chassis.
Dec 11 14:03:16 compute-0 ovn_controller[97832]: 2025-12-11T14:03:16Z|00036|binding|INFO|f5b2dabe-ea06-4461-8450-3d306c4cd300: Claiming fa:16:3e:fb:f0:71 192.168.0.184
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.275 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:f0:71 192.168.0.184'], port_security=['fa:16:3e:fb:f0:71 192.168.0.184'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5m7msfabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-port-sdeey5zmszca', 'neutron:cidrs': '192.168.0.184/24', 'neutron:device_id': '98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5m7msfabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-port-sdeey5zmszca', 'neutron:project_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d7aa95c-a649-4fd4-9e5a-18c0b6217450', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d8798ec-229b-449a-9c37-334c24aa485f, chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=f5b2dabe-ea06-4461-8450-3d306c4cd300) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.277 106686 INFO neutron.agent.ovn.metadata.agent [-] Port f5b2dabe-ea06-4461-8450-3d306c4cd300 in datapath 62eb1d54-32e6-4ea5-8151-f2c97214c84d bound to our chassis#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.278 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62eb1d54-32e6-4ea5-8151-f2c97214c84d#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.287 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:16 compute-0 ovn_controller[97832]: 2025-12-11T14:03:16Z|00037|binding|INFO|Setting lport f5b2dabe-ea06-4461-8450-3d306c4cd300 ovn-installed in OVS
Dec 11 14:03:16 compute-0 ovn_controller[97832]: 2025-12-11T14:03:16Z|00038|binding|INFO|Setting lport f5b2dabe-ea06-4461-8450-3d306c4cd300 up in Southbound
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.291 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.299 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4e9192-ee73-4128-b5d3-e7c955dd60fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:03:16 compute-0 systemd-machined[155778]: New machine qemu-2-instance-00000002.
Dec 11 14:03:16 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec 11 14:03:16 compute-0 podman[240458]: 2025-12-11 14:03:16.330306941 +0000 UTC m=+0.090787714 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.335 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c86592-a157-43cf-9aac-8713208cd372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:03:16 compute-0 systemd-udevd[240494]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.342 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[8e6e2283-ad86-4c86-87ed-c5e370172e62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:03:16 compute-0 NetworkManager[56353]: <info>  [1765461796.3565] device (tapf5b2dabe-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 14:03:16 compute-0 NetworkManager[56353]: <info>  [1765461796.3571] device (tapf5b2dabe-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.373 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae82a40-176f-401f-b7ae-7ae2130f422b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.389 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[77b1b928-200d-4324-a5b6-1f1585e6053e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62eb1d54-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:cc:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378116, 'reachable_time': 17739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240505, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.404 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[90a996cf-ce12-4f00-b051-8cba4f63b6e0]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378129, 'tstamp': 378129}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240506, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378131, 'tstamp': 378131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240506, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.406 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62eb1d54-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.408 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.409 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.410 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62eb1d54-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.410 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.411 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62eb1d54-30, col_values=(('external_ids', {'iface-id': 'dd9a733c-26da-4e0b-928d-1f82d21083bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:03:16 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:03:16.412 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.621 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765461796.6206827, 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.622 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] VM Started (Lifecycle Event)#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.650 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.656 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765461796.6213386, 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.657 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] VM Paused (Lifecycle Event)#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.674 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.680 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:03:16 compute-0 nova_compute[189440]: 2025-12-11 14:03:16.698 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.042 189444 DEBUG nova.compute.manager [req-4d0ac312-17cb-4f9a-8d69-91ec5d370d07 req-c06e9c22-ae2d-46cc-bb4b-3b1c88b9082e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Received event network-vif-plugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.042 189444 DEBUG oslo_concurrency.lockutils [req-4d0ac312-17cb-4f9a-8d69-91ec5d370d07 req-c06e9c22-ae2d-46cc-bb4b-3b1c88b9082e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.043 189444 DEBUG oslo_concurrency.lockutils [req-4d0ac312-17cb-4f9a-8d69-91ec5d370d07 req-c06e9c22-ae2d-46cc-bb4b-3b1c88b9082e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.043 189444 DEBUG oslo_concurrency.lockutils [req-4d0ac312-17cb-4f9a-8d69-91ec5d370d07 req-c06e9c22-ae2d-46cc-bb4b-3b1c88b9082e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.043 189444 DEBUG nova.compute.manager [req-4d0ac312-17cb-4f9a-8d69-91ec5d370d07 req-c06e9c22-ae2d-46cc-bb4b-3b1c88b9082e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Processing event network-vif-plugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.044 189444 DEBUG nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.048 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765461797.04817, 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.048 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] VM Resumed (Lifecycle Event)#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.050 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.055 189444 INFO nova.virt.libvirt.driver [-] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Instance spawned successfully.#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.055 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.079 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.219 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.225 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.225 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.226 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.226 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.227 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.227 189444 DEBUG nova.virt.libvirt.driver [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.252 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.289 189444 INFO nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Took 9.22 seconds to spawn the instance on the hypervisor.#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.289 189444 DEBUG nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.353 189444 INFO nova.compute.manager [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Took 9.75 seconds to build instance.#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.373 189444 DEBUG oslo_concurrency.lockutils [None req-b17add6c-9bcf-4bb4-9144-f51e4969db24 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.877 189444 DEBUG nova.network.neutron [req-7a2c5a95-0cec-4685-aa63-693d2edd8b93 req-56a9bd81-9c42-4814-9049-0a20cb207089 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updated VIF entry in instance network info cache for port f5b2dabe-ea06-4461-8450-3d306c4cd300. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.878 189444 DEBUG nova.network.neutron [req-7a2c5a95-0cec-4685-aa63-693d2edd8b93 req-56a9bd81-9c42-4814-9049-0a20cb207089 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updating instance_info_cache with network_info: [{"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:03:17 compute-0 nova_compute[189440]: 2025-12-11 14:03:17.898 189444 DEBUG oslo_concurrency.lockutils [req-7a2c5a95-0cec-4685-aa63-693d2edd8b93 req-56a9bd81-9c42-4814-9049-0a20cb207089 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:03:19 compute-0 nova_compute[189440]: 2025-12-11 14:03:19.270 189444 DEBUG nova.compute.manager [req-71ab850c-daaa-49d1-944a-74611362e856 req-58f7f159-8bf1-41bd-a1d5-b876a633d15d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Received event network-vif-plugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:03:19 compute-0 nova_compute[189440]: 2025-12-11 14:03:19.271 189444 DEBUG oslo_concurrency.lockutils [req-71ab850c-daaa-49d1-944a-74611362e856 req-58f7f159-8bf1-41bd-a1d5-b876a633d15d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:19 compute-0 nova_compute[189440]: 2025-12-11 14:03:19.271 189444 DEBUG oslo_concurrency.lockutils [req-71ab850c-daaa-49d1-944a-74611362e856 req-58f7f159-8bf1-41bd-a1d5-b876a633d15d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:19 compute-0 nova_compute[189440]: 2025-12-11 14:03:19.272 189444 DEBUG oslo_concurrency.lockutils [req-71ab850c-daaa-49d1-944a-74611362e856 req-58f7f159-8bf1-41bd-a1d5-b876a633d15d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:19 compute-0 nova_compute[189440]: 2025-12-11 14:03:19.272 189444 DEBUG nova.compute.manager [req-71ab850c-daaa-49d1-944a-74611362e856 req-58f7f159-8bf1-41bd-a1d5-b876a633d15d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] No waiting events found dispatching network-vif-plugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:03:19 compute-0 nova_compute[189440]: 2025-12-11 14:03:19.273 189444 WARNING nova.compute.manager [req-71ab850c-daaa-49d1-944a-74611362e856 req-58f7f159-8bf1-41bd-a1d5-b876a633d15d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Received unexpected event network-vif-plugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 for instance with vm_state active and task_state None.#033[00m
Dec 11 14:03:19 compute-0 podman[240515]: 2025-12-11 14:03:19.520711589 +0000 UTC m=+0.113914325 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 11 14:03:20 compute-0 nova_compute[189440]: 2025-12-11 14:03:20.108 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:20 compute-0 nova_compute[189440]: 2025-12-11 14:03:20.217 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:21 compute-0 podman[240536]: 2025-12-11 14:03:21.470293158 +0000 UTC m=+0.071542767 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 11 14:03:24 compute-0 podman[240556]: 2025-12-11 14:03:24.461714059 +0000 UTC m=+0.066227808 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 11 14:03:24 compute-0 podman[240557]: 2025-12-11 14:03:24.516487898 +0000 UTC m=+0.110483973 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 11 14:03:25 compute-0 nova_compute[189440]: 2025-12-11 14:03:25.111 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:25 compute-0 nova_compute[189440]: 2025-12-11 14:03:25.221 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:26 compute-0 podman[240595]: 2025-12-11 14:03:26.48596219 +0000 UTC m=+0.089072063 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 14:03:29 compute-0 podman[240615]: 2025-12-11 14:03:29.568285496 +0000 UTC m=+0.163626482 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 14:03:29 compute-0 podman[203650]: time="2025-12-11T14:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:03:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:03:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4778 "" "Go-http-client/1.1"
Dec 11 14:03:30 compute-0 nova_compute[189440]: 2025-12-11 14:03:30.113 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:30 compute-0 nova_compute[189440]: 2025-12-11 14:03:30.224 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:31 compute-0 openstack_network_exporter[205834]: ERROR   14:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:03:31 compute-0 openstack_network_exporter[205834]: ERROR   14:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:03:31 compute-0 openstack_network_exporter[205834]: ERROR   14:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:03:31 compute-0 openstack_network_exporter[205834]: ERROR   14:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:03:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:03:31 compute-0 openstack_network_exporter[205834]: ERROR   14:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:03:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:03:33 compute-0 podman[240641]: 2025-12-11 14:03:33.519527278 +0000 UTC m=+0.118678501 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 11 14:03:35 compute-0 nova_compute[189440]: 2025-12-11 14:03:35.118 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:35 compute-0 nova_compute[189440]: 2025-12-11 14:03:35.228 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:36 compute-0 podman[240662]: 2025-12-11 14:03:36.472634979 +0000 UTC m=+0.071132927 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:03:40 compute-0 nova_compute[189440]: 2025-12-11 14:03:40.122 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:40 compute-0 nova_compute[189440]: 2025-12-11 14:03:40.230 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:42 compute-0 nova_compute[189440]: 2025-12-11 14:03:42.428 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:03:42 compute-0 nova_compute[189440]: 2025-12-11 14:03:42.429 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.980 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.980 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:03:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:42.995 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 82437023-b24d-48bf-af1c-d1957df4da67 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 11 14:03:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:43.356 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/82437023-b24d-48bf-af1c-d1957df4da67 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}cccfdb98f7814d2104ef30522629f30f2e7025f3d377e4b2e1b0c401a523009e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.004 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Thu, 11 Dec 2025 14:03:43 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e98d8437-f3cb-4100-93d3-aada37615c4b x-openstack-request-id: req-e98d8437-f3cb-4100-93d3-aada37615c4b _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.004 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "82437023-b24d-48bf-af1c-d1957df4da67", "name": "test_0", "status": "ACTIVE", "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "user_id": "26c7a9a5c1c0404bb144cd3cba8ecf9f", "metadata": {}, "hostId": "8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3", "image": {"id": "714a3758-ec97-4149-8cfb-208787ab3704", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/714a3758-ec97-4149-8cfb-208787ab3704"}]}, "flavor": {"id": "1d6c0fe6-4c75-4860-b5c4-bc55bee577e2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/1d6c0fe6-4c75-4860-b5c4-bc55bee577e2"}]}, "created": "2025-12-11T14:01:48Z", "updated": "2025-12-11T14:01:58Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.20", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4a:ac:fb"}, {"version": 4, "addr": "192.168.122.192", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4a:ac:fb"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/82437023-b24d-48bf-af1c-d1957df4da67"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/82437023-b24d-48bf-af1c-d1957df4da67"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-11T14:01:58.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.004 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/82437023-b24d-48bf-af1c-d1957df4da67 used request id req-e98d8437-f3cb-4100-93d3-aada37615c4b request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.006 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '82437023-b24d-48bf-af1c-d1957df4da67', 'name': 'test_0', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.010 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.011 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}cccfdb98f7814d2104ef30522629f30f2e7025f3d377e4b2e1b0c401a523009e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 11 14:03:44 compute-0 nova_compute[189440]: 2025-12-11 14:03:44.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.349 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Thu, 11 Dec 2025 14:03:44 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f183690d-5076-4a0d-a24d-7d820f9791df x-openstack-request-id: req-f183690d-5076-4a0d-a24d-7d820f9791df _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.350 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2", "name": "vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma", "status": "ACTIVE", "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "user_id": "26c7a9a5c1c0404bb144cd3cba8ecf9f", "metadata": {"metering.server_group": "f7b42205-1b4f-49eb-9f02-9c04957c72b4"}, "hostId": "8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3", "image": {"id": "714a3758-ec97-4149-8cfb-208787ab3704", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/714a3758-ec97-4149-8cfb-208787ab3704"}]}, "flavor": {"id": "1d6c0fe6-4c75-4860-b5c4-bc55bee577e2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/1d6c0fe6-4c75-4860-b5c4-bc55bee577e2"}]}, "created": "2025-12-11T14:03:06Z", "updated": "2025-12-11T14:03:17Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.184", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fb:f0:71"}, {"version": 4, "addr": "192.168.122.195", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fb:f0:71"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-11T14:03:17.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.350 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 used request id req-f183690d-5076-4a0d-a24d-7d820f9791df request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.352 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2', 'name': 'vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.353 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.354 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:03:44.354009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.361 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 82437023-b24d-48bf-af1c-d1957df4da67 / tape82f4978-3a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.362 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.366 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 / tapf5b2dabe-ea inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.367 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.368 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.368 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:03:44.369581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.394 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/cpu volume: 35690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.430 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/cpu volume: 26780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:03:44.432662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.456 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.457 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.457 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.490 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.491 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.491 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.493 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.493 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:03:44.493420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.493 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.494 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.494 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.496 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.496 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:03:44.496574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.497 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.498 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.498 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2: ceilometer.compute.pollsters.NoVolumeException
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.499 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:03:44.499737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.500 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.501 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-11T14:03:44.502864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.503 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.504 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma>]
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.505 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.506 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.506 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:03:44.506089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.507 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.508 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.508 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.509 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.509 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.510 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:03:44.508930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.510 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.510 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.511 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.511 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:03:44.510678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.512 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.512 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.513 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.513 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.513 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:03:44.512588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.514 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.514 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.515 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.515 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:03:44.515807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.586 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 414087761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.587 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 86850533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.587 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 54519228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.687 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 267377867 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.687 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.688 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 909023 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.688 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.689 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:03:44.689083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.689 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.689 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.690 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.690 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.690 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.691 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.691 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.692 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.692 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.692 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.692 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:03:44.692038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.693 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.693 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.693 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.694 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.694 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.694 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.694 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.694 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.695 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:03:44.694532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.695 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.696 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.696 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.696 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.697 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.697 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.697 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.697 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.698 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.698 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 1535528083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.698 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 13914030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.698 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.699 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.699 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.699 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.700 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.700 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.700 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.701 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.701 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:03:44.698026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:03:44.700898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.702 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:03:44.703014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.703 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.703 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.704 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.704 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.704 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.704 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.705 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.705 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.706 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.706 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.706 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.707 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.707 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.707 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:03:44.705947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-11T14:03:44.707263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.707 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.707 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma>]
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.708 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.708 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:03:44.708504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.709 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.709 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:03:44.709754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.709 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.710 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:03:44.711358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.712 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.712 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.713 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:03:44.712632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.714 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.714 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.714 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.714 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.715 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.715 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.715 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.716 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.716 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.716 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:03:44.714251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:03:44.715719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.717 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.717 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:03:44.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:03:45 compute-0 nova_compute[189440]: 2025-12-11 14:03:45.125 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:45 compute-0 nova_compute[189440]: 2025-12-11 14:03:45.232 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:46 compute-0 podman[240689]: 2025-12-11 14:03:46.468196865 +0000 UTC m=+0.064723111 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:03:46 compute-0 ovn_controller[97832]: 2025-12-11T14:03:46Z|00039|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.259 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.259 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.260 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.260 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.344 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.406 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.407 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.481 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.482 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.539 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.540 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.604 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.611 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.694 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.696 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.771 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.772 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.840 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.845 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:03:47 compute-0 nova_compute[189440]: 2025-12-11 14:03:47.901 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.278 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.279 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5092MB free_disk=72.37300491333008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.280 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.280 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.472 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.473 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.474 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.475 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.634 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.650 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.669 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:03:48 compute-0 nova_compute[189440]: 2025-12-11 14:03:48.670 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:03:49 compute-0 nova_compute[189440]: 2025-12-11 14:03:49.665 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:03:49 compute-0 nova_compute[189440]: 2025-12-11 14:03:49.666 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:03:49 compute-0 nova_compute[189440]: 2025-12-11 14:03:49.668 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:03:49 compute-0 nova_compute[189440]: 2025-12-11 14:03:49.669 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:03:49 compute-0 nova_compute[189440]: 2025-12-11 14:03:49.856 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:03:49 compute-0 nova_compute[189440]: 2025-12-11 14:03:49.858 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:03:49 compute-0 nova_compute[189440]: 2025-12-11 14:03:49.858 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:03:49 compute-0 nova_compute[189440]: 2025-12-11 14:03:49.859 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:03:50 compute-0 nova_compute[189440]: 2025-12-11 14:03:50.129 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:50 compute-0 nova_compute[189440]: 2025-12-11 14:03:50.234 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:50 compute-0 podman[240738]: 2025-12-11 14:03:50.491648829 +0000 UTC m=+0.092103526 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 11 14:03:52 compute-0 ovn_controller[97832]: 2025-12-11T14:03:52Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:f0:71 192.168.0.184
Dec 11 14:03:52 compute-0 ovn_controller[97832]: 2025-12-11T14:03:52Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:f0:71 192.168.0.184
Dec 11 14:03:52 compute-0 podman[240763]: 2025-12-11 14:03:52.486964488 +0000 UTC m=+0.076844895 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 11 14:03:52 compute-0 nova_compute[189440]: 2025-12-11 14:03:52.877 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:03:53 compute-0 nova_compute[189440]: 2025-12-11 14:03:53.179 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:03:53 compute-0 nova_compute[189440]: 2025-12-11 14:03:53.191 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:03:53 compute-0 nova_compute[189440]: 2025-12-11 14:03:53.193 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:03:53 compute-0 nova_compute[189440]: 2025-12-11 14:03:53.194 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:03:53 compute-0 nova_compute[189440]: 2025-12-11 14:03:53.195 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:03:53 compute-0 nova_compute[189440]: 2025-12-11 14:03:53.196 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:03:55 compute-0 nova_compute[189440]: 2025-12-11 14:03:55.133 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:55 compute-0 nova_compute[189440]: 2025-12-11 14:03:55.239 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:03:55 compute-0 podman[240785]: 2025-12-11 14:03:55.259987299 +0000 UTC m=+0.074315754 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 11 14:03:55 compute-0 podman[240786]: 2025-12-11 14:03:55.266875236 +0000 UTC m=+0.078523046 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, name=ubi9, io.openshift.expose-services=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 11 14:03:57 compute-0 podman[240821]: 2025-12-11 14:03:57.522537932 +0000 UTC m=+0.111732802 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:03:59 compute-0 podman[203650]: time="2025-12-11T14:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:03:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:03:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4773 "" "Go-http-client/1.1"
Dec 11 14:04:00 compute-0 nova_compute[189440]: 2025-12-11 14:04:00.137 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:00 compute-0 nova_compute[189440]: 2025-12-11 14:04:00.240 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:00 compute-0 podman[240839]: 2025-12-11 14:04:00.513208985 +0000 UTC m=+0.101263928 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 14:04:01 compute-0 openstack_network_exporter[205834]: ERROR   14:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:04:01 compute-0 openstack_network_exporter[205834]: ERROR   14:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:04:01 compute-0 openstack_network_exporter[205834]: ERROR   14:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:04:01 compute-0 openstack_network_exporter[205834]: ERROR   14:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:04:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:04:01 compute-0 openstack_network_exporter[205834]: ERROR   14:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:04:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:04:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:04:04.078 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:04:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:04:04.078 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:04:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:04:04.079 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:04:04 compute-0 podman[240863]: 2025-12-11 14:04:04.474163493 +0000 UTC m=+0.075690158 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal)
Dec 11 14:04:05 compute-0 nova_compute[189440]: 2025-12-11 14:04:05.141 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:05 compute-0 nova_compute[189440]: 2025-12-11 14:04:05.242 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:07 compute-0 podman[240885]: 2025-12-11 14:04:07.524000171 +0000 UTC m=+0.110969135 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:04:10 compute-0 nova_compute[189440]: 2025-12-11 14:04:10.146 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:10 compute-0 nova_compute[189440]: 2025-12-11 14:04:10.245 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:15 compute-0 nova_compute[189440]: 2025-12-11 14:04:15.151 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:15 compute-0 nova_compute[189440]: 2025-12-11 14:04:15.248 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:17 compute-0 podman[240907]: 2025-12-11 14:04:17.503296595 +0000 UTC m=+0.092867974 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:04:20 compute-0 nova_compute[189440]: 2025-12-11 14:04:20.174 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:20 compute-0 nova_compute[189440]: 2025-12-11 14:04:20.250 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:21 compute-0 podman[240931]: 2025-12-11 14:04:21.485522836 +0000 UTC m=+0.078626708 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 14:04:23 compute-0 podman[240950]: 2025-12-11 14:04:23.528746278 +0000 UTC m=+0.123893927 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:04:25 compute-0 nova_compute[189440]: 2025-12-11 14:04:25.178 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:25 compute-0 nova_compute[189440]: 2025-12-11 14:04:25.254 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:25 compute-0 podman[240971]: 2025-12-11 14:04:25.519401763 +0000 UTC m=+0.105713945 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 14:04:25 compute-0 podman[240970]: 2025-12-11 14:04:25.532978703 +0000 UTC m=+0.113746471 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 11 14:04:28 compute-0 podman[241007]: 2025-12-11 14:04:28.503219469 +0000 UTC m=+0.103846201 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251210)
Dec 11 14:04:29 compute-0 podman[203650]: time="2025-12-11T14:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:04:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:04:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec 11 14:04:30 compute-0 nova_compute[189440]: 2025-12-11 14:04:30.183 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:30 compute-0 nova_compute[189440]: 2025-12-11 14:04:30.256 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:31 compute-0 openstack_network_exporter[205834]: ERROR   14:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:04:31 compute-0 openstack_network_exporter[205834]: ERROR   14:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:04:31 compute-0 openstack_network_exporter[205834]: ERROR   14:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:04:31 compute-0 openstack_network_exporter[205834]: ERROR   14:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:04:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:04:31 compute-0 openstack_network_exporter[205834]: ERROR   14:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:04:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:04:31 compute-0 podman[241027]: 2025-12-11 14:04:31.542681656 +0000 UTC m=+0.131626655 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 11 14:04:35 compute-0 nova_compute[189440]: 2025-12-11 14:04:35.187 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:35 compute-0 nova_compute[189440]: 2025-12-11 14:04:35.258 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:35 compute-0 podman[241053]: 2025-12-11 14:04:35.529248695 +0000 UTC m=+0.124797619 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Dec 11 14:04:38 compute-0 podman[241074]: 2025-12-11 14:04:38.525920204 +0000 UTC m=+0.106949567 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:04:40 compute-0 nova_compute[189440]: 2025-12-11 14:04:40.189 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:40 compute-0 nova_compute[189440]: 2025-12-11 14:04:40.261 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:42 compute-0 nova_compute[189440]: 2025-12-11 14:04:42.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:42 compute-0 nova_compute[189440]: 2025-12-11 14:04:42.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:04:44 compute-0 nova_compute[189440]: 2025-12-11 14:04:44.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:45 compute-0 nova_compute[189440]: 2025-12-11 14:04:45.195 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:45 compute-0 nova_compute[189440]: 2025-12-11 14:04:45.263 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:48 compute-0 nova_compute[189440]: 2025-12-11 14:04:48.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:48 compute-0 nova_compute[189440]: 2025-12-11 14:04:48.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:04:48 compute-0 podman[241098]: 2025-12-11 14:04:48.512615742 +0000 UTC m=+0.095422766 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:04:48 compute-0 nova_compute[189440]: 2025-12-11 14:04:48.887 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:04:48 compute-0 nova_compute[189440]: 2025-12-11 14:04:48.888 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:04:48 compute-0 nova_compute[189440]: 2025-12-11 14:04:48.889 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:04:50 compute-0 nova_compute[189440]: 2025-12-11 14:04:50.200 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:50 compute-0 nova_compute[189440]: 2025-12-11 14:04:50.265 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:50 compute-0 nova_compute[189440]: 2025-12-11 14:04:50.901 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updating instance_info_cache with network_info: [{"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.041 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.042 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.044 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.045 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.045 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.046 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.046 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.070 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.071 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.071 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.071 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.155 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.248 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.251 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.315 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.317 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.385 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.387 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.451 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.459 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.521 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.522 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.584 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.585 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.658 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.661 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:04:51 compute-0 nova_compute[189440]: 2025-12-11 14:04:51.725 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.100 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.102 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=72.35144805908203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.102 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.102 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.197 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.197 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.197 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.198 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.269 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.288 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.289 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:04:52 compute-0 nova_compute[189440]: 2025-12-11 14:04:52.290 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:04:52 compute-0 podman[241147]: 2025-12-11 14:04:52.474202373 +0000 UTC m=+0.076064227 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 11 14:04:53 compute-0 nova_compute[189440]: 2025-12-11 14:04:53.285 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:53 compute-0 nova_compute[189440]: 2025-12-11 14:04:53.285 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:04:54 compute-0 podman[241171]: 2025-12-11 14:04:54.482711852 +0000 UTC m=+0.085954097 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 14:04:55 compute-0 nova_compute[189440]: 2025-12-11 14:04:55.203 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:55 compute-0 nova_compute[189440]: 2025-12-11 14:04:55.268 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:04:56 compute-0 podman[241190]: 2025-12-11 14:04:56.468476369 +0000 UTC m=+0.070318567 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-container, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0)
Dec 11 14:04:56 compute-0 podman[241189]: 2025-12-11 14:04:56.487413609 +0000 UTC m=+0.088016687 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 11 14:04:59 compute-0 podman[241224]: 2025-12-11 14:04:59.517885957 +0000 UTC m=+0.104750962 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, 
tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:04:59 compute-0 podman[203650]: time="2025-12-11T14:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:04:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:04:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4784 "" "Go-http-client/1.1"
Dec 11 14:05:00 compute-0 nova_compute[189440]: 2025-12-11 14:05:00.209 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:00 compute-0 nova_compute[189440]: 2025-12-11 14:05:00.269 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:01 compute-0 openstack_network_exporter[205834]: ERROR   14:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:05:01 compute-0 openstack_network_exporter[205834]: ERROR   14:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:05:01 compute-0 openstack_network_exporter[205834]: ERROR   14:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:05:01 compute-0 openstack_network_exporter[205834]: ERROR   14:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:05:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:05:01 compute-0 openstack_network_exporter[205834]: ERROR   14:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:05:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:05:02 compute-0 podman[241243]: 2025-12-11 14:05:02.516137114 +0000 UTC m=+0.115315140 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:05:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:05:04.079 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:05:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:05:04.080 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:05:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:05:04.080 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:05:05 compute-0 nova_compute[189440]: 2025-12-11 14:05:05.214 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:05 compute-0 nova_compute[189440]: 2025-12-11 14:05:05.271 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:06 compute-0 podman[241267]: 2025-12-11 14:05:06.48651241 +0000 UTC m=+0.091264816 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, version=9.6)
Dec 11 14:05:09 compute-0 podman[241289]: 2025-12-11 14:05:09.525666847 +0000 UTC m=+0.121488487 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:05:10 compute-0 nova_compute[189440]: 2025-12-11 14:05:10.218 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:10 compute-0 nova_compute[189440]: 2025-12-11 14:05:10.273 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:15 compute-0 nova_compute[189440]: 2025-12-11 14:05:15.221 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:15 compute-0 nova_compute[189440]: 2025-12-11 14:05:15.275 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:16 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 11 14:05:19 compute-0 podman[241313]: 2025-12-11 14:05:19.510171043 +0000 UTC m=+0.104479348 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:05:20 compute-0 nova_compute[189440]: 2025-12-11 14:05:20.224 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:20 compute-0 nova_compute[189440]: 2025-12-11 14:05:20.277 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:23 compute-0 podman[241336]: 2025-12-11 14:05:23.539657882 +0000 UTC m=+0.130896062 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 11 14:05:25 compute-0 nova_compute[189440]: 2025-12-11 14:05:25.228 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:25 compute-0 nova_compute[189440]: 2025-12-11 14:05:25.280 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:25 compute-0 podman[241356]: 2025-12-11 14:05:25.512776472 +0000 UTC m=+0.099501089 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:05:27 compute-0 podman[241378]: 2025-12-11 14:05:27.516020646 +0000 UTC m=+0.102730576 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:05:27 compute-0 podman[241379]: 2025-12-11 14:05:27.531527148 +0000 UTC m=+0.121199999 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Dec 11 14:05:29 compute-0 podman[203650]: time="2025-12-11T14:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:05:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:05:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Dec 11 14:05:30 compute-0 nova_compute[189440]: 2025-12-11 14:05:30.233 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:30 compute-0 nova_compute[189440]: 2025-12-11 14:05:30.282 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:30 compute-0 podman[241420]: 2025-12-11 14:05:30.475969878 +0000 UTC m=+0.074658202 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, config_id=edpm, managed_by=edpm_ansible, 
tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 11 14:05:31 compute-0 openstack_network_exporter[205834]: ERROR   14:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:05:31 compute-0 openstack_network_exporter[205834]: ERROR   14:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:05:31 compute-0 openstack_network_exporter[205834]: ERROR   14:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:05:31 compute-0 openstack_network_exporter[205834]: ERROR   14:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:05:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:05:31 compute-0 openstack_network_exporter[205834]: ERROR   14:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:05:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:05:33 compute-0 podman[241440]: 2025-12-11 14:05:33.524392775 +0000 UTC m=+0.123268189 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 11 14:05:35 compute-0 nova_compute[189440]: 2025-12-11 14:05:35.237 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:35 compute-0 nova_compute[189440]: 2025-12-11 14:05:35.285 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:37 compute-0 podman[241464]: 2025-12-11 14:05:37.476745683 +0000 UTC m=+0.079777966 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Dec 11 14:05:40 compute-0 nova_compute[189440]: 2025-12-11 14:05:40.241 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:40 compute-0 nova_compute[189440]: 2025-12-11 14:05:40.286 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:40 compute-0 podman[241484]: 2025-12-11 14:05:40.467008253 +0000 UTC m=+0.069054668 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:05:42 compute-0 nova_compute[189440]: 2025-12-11 14:05:42.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:05:42 compute-0 nova_compute[189440]: 2025-12-11 14:05:42.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.980 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.981 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.990 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '82437023-b24d-48bf-af1c-d1957df4da67', 'name': 'test_0', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.993 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2', 'name': 'vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.993 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.994 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:05:42.994254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:42.999 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.005 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes volume: 4492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.006 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.007 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:05:43.007362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.033 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/cpu volume: 37310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.063 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/cpu volume: 85730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.066 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:05:43.067685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.067 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.101 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.102 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.103 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.132 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.132 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.133 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.135 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.135 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.136 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.138 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes.delta volume: 4492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.139 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.139 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.140 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.141 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.142 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/memory.usage volume: 49.17578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.144 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:05:43.136220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:05:43.141258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.145 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.145 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:05:43.145159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.146 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.147 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.148 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.149 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.149 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.149 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.149 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.150 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.150 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:05:43.149834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.151 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets volume: 37 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.152 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.153 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.153 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:05:43.153691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.155 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.156 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.157 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.158 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.158 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:05:43.157488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.160 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.162 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.162 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:05:43.161355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.163 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.164 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.164 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.165 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.166 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.166 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.166 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.167 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:05:43.167625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.254 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 414087761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.256 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 86850533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.257 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 54519228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.358 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 386530042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.359 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 87643374 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.359 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 69768051 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.360 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.361 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.362 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.362 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.363 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.363 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:05:43.361706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.364 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.365 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.367 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.368 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:05:43.367092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.369 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.369 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.370 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.370 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.371 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.372 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.372 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.372 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.373 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.373 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.374 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 41828352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.374 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:05:43.372315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.375 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.376 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.377 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.377 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.377 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.378 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 1535528083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.378 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 13914030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.379 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:05:43.377666) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.379 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 7708596857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.380 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 207693799 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.380 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.382 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.383 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.383 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:05:43.382816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.383 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.384 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.385 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.386 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.387 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.387 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.388 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.388 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.389 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:05:43.386184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.391 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.391 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.392 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.392 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes.delta volume: 4759 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:05:43.391581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.393 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.394 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.395 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.395 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.396 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.396 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.396 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.396 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.397 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:05:43.395072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:05:43.397334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.397 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.398 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.399 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.399 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.400 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:05:43.400038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.401 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.402 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:05:43.402195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.402 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.403 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.404 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:05:43.405108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.405 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.405 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.406 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.406 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.408 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:05:43.407694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.408 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.409 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.409 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.410 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.410 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:05:43.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:05:45 compute-0 nova_compute[189440]: 2025-12-11 14:05:45.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:05:45 compute-0 nova_compute[189440]: 2025-12-11 14:05:45.246 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:05:45 compute-0 nova_compute[189440]: 2025-12-11 14:05:45.288 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:05:48 compute-0 nova_compute[189440]: 2025-12-11 14:05:48.237 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:05:48 compute-0 nova_compute[189440]: 2025-12-11 14:05:48.239 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 11 14:05:48 compute-0 nova_compute[189440]: 2025-12-11 14:05:48.240 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 11 14:05:48 compute-0 nova_compute[189440]: 2025-12-11 14:05:48.988 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 11 14:05:48 compute-0 nova_compute[189440]: 2025-12-11 14:05:48.989 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 11 14:05:48 compute-0 nova_compute[189440]: 2025-12-11 14:05:48.990 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 11 14:05:48 compute-0 nova_compute[189440]: 2025-12-11 14:05:48.991 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 11 14:05:50 compute-0 nova_compute[189440]: 2025-12-11 14:05:50.251 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:05:50 compute-0 nova_compute[189440]: 2025-12-11 14:05:50.291 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:05:50 compute-0 podman[241510]: 2025-12-11 14:05:50.48447651 +0000 UTC m=+0.067461950 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:05:52 compute-0 nova_compute[189440]: 2025-12-11 14:05:52.964 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.562 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.563 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.563 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.564 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.564 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.565 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.565 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.597 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.598 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.599 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.599 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.884 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.962 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:05:53 compute-0 nova_compute[189440]: 2025-12-11 14:05:53.964 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.028 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.031 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.085 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.087 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.152 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.160 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.230 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.231 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.297 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.298 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.360 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.361 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.438 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:05:54 compute-0 podman[241553]: 2025-12-11 14:05:54.503238442 +0000 UTC m=+0.096779263 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.821 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.822 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5065MB free_disk=72.35144805908203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.823 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.823 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.976 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.977 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.977 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:05:54 compute-0 nova_compute[189440]: 2025-12-11 14:05:54.977 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:05:55 compute-0 nova_compute[189440]: 2025-12-11 14:05:55.043 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:05:55 compute-0 nova_compute[189440]: 2025-12-11 14:05:55.060 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:05:55 compute-0 nova_compute[189440]: 2025-12-11 14:05:55.062 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:05:55 compute-0 nova_compute[189440]: 2025-12-11 14:05:55.063 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:05:55 compute-0 nova_compute[189440]: 2025-12-11 14:05:55.255 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:55 compute-0 nova_compute[189440]: 2025-12-11 14:05:55.293 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:05:56 compute-0 podman[241573]: 2025-12-11 14:05:56.493600916 +0000 UTC m=+0.084049928 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:05:57 compute-0 nova_compute[189440]: 2025-12-11 14:05:57.056 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:05:58 compute-0 podman[241593]: 2025-12-11 14:05:58.550491557 +0000 UTC m=+0.139173200 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 11 14:05:58 compute-0 podman[241594]: 2025-12-11 14:05:58.575748783 +0000 UTC m=+0.158838132 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release=1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release-0.7.12=)
Dec 11 14:05:59 compute-0 podman[203650]: time="2025-12-11T14:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:05:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:05:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
Dec 11 14:06:00 compute-0 nova_compute[189440]: 2025-12-11 14:06:00.260 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:00 compute-0 nova_compute[189440]: 2025-12-11 14:06:00.297 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:01 compute-0 openstack_network_exporter[205834]: ERROR   14:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:06:01 compute-0 openstack_network_exporter[205834]: ERROR   14:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:06:01 compute-0 openstack_network_exporter[205834]: ERROR   14:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:06:01 compute-0 openstack_network_exporter[205834]: ERROR   14:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:06:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:06:01 compute-0 openstack_network_exporter[205834]: ERROR   14:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:06:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:06:01 compute-0 podman[241630]: 2025-12-11 14:06:01.496568516 +0000 UTC m=+0.101192499 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, managed_by=edpm_ansible, org.label-schema.build-date=20251210)
Dec 11 14:06:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:06:04.080 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:06:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:06:04.080 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:06:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:06:04.081 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:06:04 compute-0 podman[241647]: 2025-12-11 14:06:04.532949723 +0000 UTC m=+0.134557690 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 14:06:05 compute-0 nova_compute[189440]: 2025-12-11 14:06:05.264 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:05 compute-0 nova_compute[189440]: 2025-12-11 14:06:05.299 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:08 compute-0 podman[241670]: 2025-12-11 14:06:08.526749625 +0000 UTC m=+0.126416654 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec 11 14:06:10 compute-0 nova_compute[189440]: 2025-12-11 14:06:10.267 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:10 compute-0 nova_compute[189440]: 2025-12-11 14:06:10.301 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:10 compute-0 podman[241691]: 2025-12-11 14:06:10.974118847 +0000 UTC m=+0.091992288 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:06:15 compute-0 nova_compute[189440]: 2025-12-11 14:06:15.272 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:15 compute-0 nova_compute[189440]: 2025-12-11 14:06:15.304 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:20 compute-0 nova_compute[189440]: 2025-12-11 14:06:20.277 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:20 compute-0 nova_compute[189440]: 2025-12-11 14:06:20.306 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:21 compute-0 podman[241716]: 2025-12-11 14:06:21.481493781 +0000 UTC m=+0.066309112 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:06:25 compute-0 nova_compute[189440]: 2025-12-11 14:06:25.282 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:25 compute-0 nova_compute[189440]: 2025-12-11 14:06:25.309 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:25 compute-0 podman[241739]: 2025-12-11 14:06:25.505157101 +0000 UTC m=+0.089689233 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 11 14:06:27 compute-0 podman[241757]: 2025-12-11 14:06:27.518677461 +0000 UTC m=+0.114859088 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 14:06:29 compute-0 podman[241778]: 2025-12-11 14:06:29.550423288 +0000 UTC m=+0.123355761 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.29.0, name=ubi9, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, com.redhat.component=ubi9-container, 
container_name=kepler, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc.)
Dec 11 14:06:29 compute-0 podman[241777]: 2025-12-11 14:06:29.566431232 +0000 UTC m=+0.147821748 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 14:06:29 compute-0 podman[203650]: time="2025-12-11T14:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:06:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:06:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
Dec 11 14:06:30 compute-0 nova_compute[189440]: 2025-12-11 14:06:30.286 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:30 compute-0 nova_compute[189440]: 2025-12-11 14:06:30.311 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:31 compute-0 openstack_network_exporter[205834]: ERROR   14:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:06:31 compute-0 openstack_network_exporter[205834]: ERROR   14:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:06:31 compute-0 openstack_network_exporter[205834]: ERROR   14:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:06:31 compute-0 openstack_network_exporter[205834]: ERROR   14:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:06:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:06:31 compute-0 openstack_network_exporter[205834]: ERROR   14:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:06:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:06:32 compute-0 podman[241812]: 2025-12-11 14:06:32.512714717 +0000 UTC m=+0.103675729 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251210, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Dec 11 14:06:35 compute-0 nova_compute[189440]: 2025-12-11 14:06:35.289 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:35 compute-0 nova_compute[189440]: 2025-12-11 14:06:35.314 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:35 compute-0 podman[241831]: 2025-12-11 14:06:35.533515259 +0000 UTC m=+0.126322392 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 14:06:39 compute-0 podman[241856]: 2025-12-11 14:06:39.481364389 +0000 UTC m=+0.080201966 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350)
Dec 11 14:06:40 compute-0 nova_compute[189440]: 2025-12-11 14:06:40.294 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:40 compute-0 nova_compute[189440]: 2025-12-11 14:06:40.316 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:41 compute-0 podman[241876]: 2025-12-11 14:06:41.483017674 +0000 UTC m=+0.076125007 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 14:06:44 compute-0 nova_compute[189440]: 2025-12-11 14:06:44.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:44 compute-0 nova_compute[189440]: 2025-12-11 14:06:44.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:06:45 compute-0 nova_compute[189440]: 2025-12-11 14:06:45.297 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:45 compute-0 nova_compute[189440]: 2025-12-11 14:06:45.318 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:47 compute-0 nova_compute[189440]: 2025-12-11 14:06:47.237 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:48 compute-0 nova_compute[189440]: 2025-12-11 14:06:48.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:48 compute-0 nova_compute[189440]: 2025-12-11 14:06:48.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:06:49 compute-0 nova_compute[189440]: 2025-12-11 14:06:49.001 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:06:49 compute-0 nova_compute[189440]: 2025-12-11 14:06:49.002 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:06:49 compute-0 nova_compute[189440]: 2025-12-11 14:06:49.002 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:06:50 compute-0 nova_compute[189440]: 2025-12-11 14:06:50.302 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:50 compute-0 nova_compute[189440]: 2025-12-11 14:06:50.321 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:50 compute-0 nova_compute[189440]: 2025-12-11 14:06:50.374 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updating instance_info_cache with network_info: [{"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:06:50 compute-0 nova_compute[189440]: 2025-12-11 14:06:50.640 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:06:50 compute-0 nova_compute[189440]: 2025-12-11 14:06:50.641 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:06:51 compute-0 nova_compute[189440]: 2025-12-11 14:06:51.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:51 compute-0 nova_compute[189440]: 2025-12-11 14:06:51.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:51 compute-0 nova_compute[189440]: 2025-12-11 14:06:51.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.231 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.297 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.329 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.330 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.331 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.332 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.424 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:06:52 compute-0 podman[241902]: 2025-12-11 14:06:52.475338036 +0000 UTC m=+0.075703168 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.494 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.495 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.581 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.583 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.646 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.648 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.712 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.719 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.781 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.784 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.848 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.849 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.909 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.911 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:06:52 compute-0 nova_compute[189440]: 2025-12-11 14:06:52.971 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.398 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.400 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5047MB free_disk=72.35144805908203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.401 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.402 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.484 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.485 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.486 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.486 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.553 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.568 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.571 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:06:53 compute-0 nova_compute[189440]: 2025-12-11 14:06:53.571 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:06:54 compute-0 nova_compute[189440]: 2025-12-11 14:06:54.510 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:54 compute-0 nova_compute[189440]: 2025-12-11 14:06:54.511 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:06:55 compute-0 nova_compute[189440]: 2025-12-11 14:06:55.307 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:55 compute-0 nova_compute[189440]: 2025-12-11 14:06:55.323 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:06:56 compute-0 podman[241949]: 2025-12-11 14:06:56.474809935 +0000 UTC m=+0.076589319 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 11 14:06:58 compute-0 podman[241969]: 2025-12-11 14:06:58.523736875 +0000 UTC m=+0.111561589 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec 11 14:06:59 compute-0 podman[203650]: time="2025-12-11T14:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:06:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:06:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4789 "" "Go-http-client/1.1"
Dec 11 14:07:00 compute-0 nova_compute[189440]: 2025-12-11 14:07:00.311 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:00 compute-0 nova_compute[189440]: 2025-12-11 14:07:00.327 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:00 compute-0 podman[241989]: 2025-12-11 14:07:00.472511632 +0000 UTC m=+0.065924713 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:07:00 compute-0 podman[241990]: 2025-12-11 14:07:00.483108475 +0000 UTC m=+0.070392640 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, io.buildah.version=1.29.0, container_name=kepler, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., name=ubi9, version=9.4, architecture=x86_64)
Dec 11 14:07:01 compute-0 openstack_network_exporter[205834]: ERROR   14:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:07:01 compute-0 openstack_network_exporter[205834]: ERROR   14:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:07:01 compute-0 openstack_network_exporter[205834]: ERROR   14:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:07:01 compute-0 openstack_network_exporter[205834]: ERROR   14:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:07:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:07:01 compute-0 openstack_network_exporter[205834]: ERROR   14:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:07:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:07:03 compute-0 podman[242027]: 2025-12-11 14:07:03.477832703 +0000 UTC m=+0.072745727 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=edpm, org.label-schema.name=CentOS 
Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 14:07:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:07:04.081 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:07:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:07:04.082 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:07:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:07:04.082 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:07:05 compute-0 nova_compute[189440]: 2025-12-11 14:07:05.315 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:05 compute-0 nova_compute[189440]: 2025-12-11 14:07:05.329 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:06 compute-0 podman[242047]: 2025-12-11 14:07:06.493093113 +0000 UTC m=+0.092362468 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 14:07:10 compute-0 nova_compute[189440]: 2025-12-11 14:07:10.321 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:10 compute-0 nova_compute[189440]: 2025-12-11 14:07:10.331 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:10 compute-0 podman[242073]: 2025-12-11 14:07:10.498260507 +0000 UTC m=+0.091716761 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Dec 11 14:07:12 compute-0 podman[242094]: 2025-12-11 14:07:12.511582703 +0000 UTC m=+0.108720410 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 14:07:15 compute-0 nova_compute[189440]: 2025-12-11 14:07:15.327 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:15 compute-0 nova_compute[189440]: 2025-12-11 14:07:15.333 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:20 compute-0 nova_compute[189440]: 2025-12-11 14:07:20.333 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:20 compute-0 nova_compute[189440]: 2025-12-11 14:07:20.339 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:23 compute-0 podman[242118]: 2025-12-11 14:07:23.459631069 +0000 UTC m=+0.061704404 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:07:25 compute-0 nova_compute[189440]: 2025-12-11 14:07:25.340 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:25 compute-0 nova_compute[189440]: 2025-12-11 14:07:25.343 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:27 compute-0 podman[242139]: 2025-12-11 14:07:27.535659827 +0000 UTC m=+0.119366754 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Dec 11 14:07:29 compute-0 podman[242159]: 2025-12-11 14:07:29.468607899 +0000 UTC m=+0.069541926 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 11 14:07:29 compute-0 podman[203650]: time="2025-12-11T14:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:07:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:07:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
Dec 11 14:07:30 compute-0 nova_compute[189440]: 2025-12-11 14:07:30.345 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:30 compute-0 nova_compute[189440]: 2025-12-11 14:07:30.352 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:31 compute-0 openstack_network_exporter[205834]: ERROR   14:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:07:31 compute-0 openstack_network_exporter[205834]: ERROR   14:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:07:31 compute-0 openstack_network_exporter[205834]: ERROR   14:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:07:31 compute-0 openstack_network_exporter[205834]: ERROR   14:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:07:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:07:31 compute-0 openstack_network_exporter[205834]: ERROR   14:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:07:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:07:31 compute-0 podman[242179]: 2025-12-11 14:07:31.481829617 +0000 UTC m=+0.085707422 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:07:31 compute-0 podman[242180]: 2025-12-11 14:07:31.48666227 +0000 UTC m=+0.081094925 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, 
io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, release-0.7.12=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Dec 11 14:07:34 compute-0 podman[242215]: 2025-12-11 14:07:34.496481479 +0000 UTC m=+0.089991802 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 11 14:07:35 compute-0 nova_compute[189440]: 2025-12-11 14:07:35.354 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:07:37 compute-0 podman[242237]: 2025-12-11 14:07:37.509233176 +0000 UTC m=+0.112650127 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 11 14:07:40 compute-0 nova_compute[189440]: 2025-12-11 14:07:40.357 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:07:40 compute-0 nova_compute[189440]: 2025-12-11 14:07:40.358 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:40 compute-0 nova_compute[189440]: 2025-12-11 14:07:40.359 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 11 14:07:40 compute-0 nova_compute[189440]: 2025-12-11 14:07:40.359 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 11 14:07:40 compute-0 nova_compute[189440]: 2025-12-11 14:07:40.360 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 11 14:07:40 compute-0 nova_compute[189440]: 2025-12-11 14:07:40.361 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:41 compute-0 podman[242263]: 2025-12-11 14:07:41.494162937 +0000 UTC m=+0.089968971 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.981 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.981 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.988 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '82437023-b24d-48bf-af1c-d1957df4da67', 'name': 'test_0', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.991 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2', 'name': 'vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.992 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.992 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:07:42.992508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:42.997 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.001 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes volume: 4562 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.001 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.002 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:07:43.002192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.034 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/cpu volume: 38970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.060 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/cpu volume: 205490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.061 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.061 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:07:43.061827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.086 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.086 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.087 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.114 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.115 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.115 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.116 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.116 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.116 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.116 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.117 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.118 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:07:43.116714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.118 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.118 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.118 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/memory.usage volume: 49.17578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.119 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.119 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.120 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.120 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.121 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:07:43.118394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:07:43.119879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.122 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.122 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.122 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets volume: 38 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.123 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.123 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.124 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.124 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.125 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.125 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:07:43.121977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.125 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:07:43.123912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:07:43.125638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.126 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.127 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.127 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.127 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.127 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.128 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.128 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.128 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.128 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.129 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:07:43.127277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:07:43.130190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.200 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 414087761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.201 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 86850533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.202 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 54519228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.282 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 386530042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.283 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 87643374 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.283 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 69768051 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.284 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.284 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.284 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.284 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.284 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.284 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.285 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.285 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.285 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.285 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.286 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.286 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.286 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.286 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.286 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.286 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.286 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.287 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.287 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.287 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.287 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.288 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.288 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:07:43.284709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.289 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.289 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.289 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.289 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 41828352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.290 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.290 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.290 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.291 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.291 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:07:43.286878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.291 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 1535528083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.291 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 13914030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.291 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.292 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 7708596857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.292 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 207693799 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.292 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.293 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:07:43.289102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:07:43.291260) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.293 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.293 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.293 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.294 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.294 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.294 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.294 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.294 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.295 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.295 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.295 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.295 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.296 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.296 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.296 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.296 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.296 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.297 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.297 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.297 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.297 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.298 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.298 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:07:43.293574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.298 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:07:43.294575) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:07:43.296441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:07:43.297588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.299 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:07:43.298405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.299 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.299 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.300 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.300 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.301 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.301 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.301 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.302 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:07:43.299920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.302 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.303 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.303 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.303 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.303 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.304 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.304 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:07:43.300742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:07:43.301691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:07:43.303010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:07:43.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:07:43 compute-0 podman[242283]: 2025-12-11 14:07:43.510496177 +0000 UTC m=+0.115746639 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 14:07:44 compute-0 nova_compute[189440]: 2025-12-11 14:07:44.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:44 compute-0 nova_compute[189440]: 2025-12-11 14:07:44.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:07:45 compute-0 nova_compute[189440]: 2025-12-11 14:07:45.358 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:45 compute-0 nova_compute[189440]: 2025-12-11 14:07:45.362 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:48 compute-0 nova_compute[189440]: 2025-12-11 14:07:48.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:49 compute-0 nova_compute[189440]: 2025-12-11 14:07:49.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:49 compute-0 nova_compute[189440]: 2025-12-11 14:07:49.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:07:49 compute-0 nova_compute[189440]: 2025-12-11 14:07:49.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:07:50 compute-0 nova_compute[189440]: 2025-12-11 14:07:50.079 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:07:50 compute-0 nova_compute[189440]: 2025-12-11 14:07:50.080 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:07:50 compute-0 nova_compute[189440]: 2025-12-11 14:07:50.081 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:07:50 compute-0 nova_compute[189440]: 2025-12-11 14:07:50.082 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:07:50 compute-0 nova_compute[189440]: 2025-12-11 14:07:50.360 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:50 compute-0 nova_compute[189440]: 2025-12-11 14:07:50.363 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:52 compute-0 nova_compute[189440]: 2025-12-11 14:07:52.184 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:07:52 compute-0 nova_compute[189440]: 2025-12-11 14:07:52.201 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:07:52 compute-0 nova_compute[189440]: 2025-12-11 14:07:52.202 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:07:52 compute-0 nova_compute[189440]: 2025-12-11 14:07:52.203 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:52 compute-0 nova_compute[189440]: 2025-12-11 14:07:52.203 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:52 compute-0 nova_compute[189440]: 2025-12-11 14:07:52.204 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 11 14:07:52 compute-0 nova_compute[189440]: 2025-12-11 14:07:52.217 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 11 14:07:52 compute-0 nova_compute[189440]: 2025-12-11 14:07:52.249 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.230 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.394 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.396 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.397 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.399 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.525 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.587 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.589 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.648 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.650 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.709 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.711 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.778 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.789 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.849 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.851 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.909 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.911 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.972 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:07:53 compute-0 nova_compute[189440]: 2025-12-11 14:07:53.974 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:07:54 compute-0 nova_compute[189440]: 2025-12-11 14:07:54.041 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:07:54 compute-0 nova_compute[189440]: 2025-12-11 14:07:54.414 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:07:54 compute-0 nova_compute[189440]: 2025-12-11 14:07:54.416 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5044MB free_disk=72.35152816772461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:07:54 compute-0 nova_compute[189440]: 2025-12-11 14:07:54.417 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:07:54 compute-0 nova_compute[189440]: 2025-12-11 14:07:54.417 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:07:54 compute-0 podman[242332]: 2025-12-11 14:07:54.502902584 +0000 UTC m=+0.099967393 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:07:54 compute-0 nova_compute[189440]: 2025-12-11 14:07:54.985 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:07:54 compute-0 nova_compute[189440]: 2025-12-11 14:07:54.985 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:07:54 compute-0 nova_compute[189440]: 2025-12-11 14:07:54.986 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:07:54 compute-0 nova_compute[189440]: 2025-12-11 14:07:54.986 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:07:55 compute-0 nova_compute[189440]: 2025-12-11 14:07:55.007 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing inventories for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 11 14:07:55 compute-0 nova_compute[189440]: 2025-12-11 14:07:55.025 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating ProviderTree inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 11 14:07:55 compute-0 nova_compute[189440]: 2025-12-11 14:07:55.026 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:07:55 compute-0 nova_compute[189440]: 2025-12-11 14:07:55.044 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing aggregate associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 11 14:07:55 compute-0 nova_compute[189440]: 2025-12-11 14:07:55.064 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing trait associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 11 14:07:55 compute-0 nova_compute[189440]: 2025-12-11 14:07:55.128 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:07:55 compute-0 nova_compute[189440]: 2025-12-11 14:07:55.362 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:55 compute-0 nova_compute[189440]: 2025-12-11 14:07:55.365 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:07:56 compute-0 nova_compute[189440]: 2025-12-11 14:07:56.037 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:07:56 compute-0 nova_compute[189440]: 2025-12-11 14:07:56.040 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:07:56 compute-0 nova_compute[189440]: 2025-12-11 14:07:56.040 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:07:56 compute-0 nova_compute[189440]: 2025-12-11 14:07:56.041 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:56 compute-0 nova_compute[189440]: 2025-12-11 14:07:56.042 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 11 14:07:57 compute-0 nova_compute[189440]: 2025-12-11 14:07:57.056 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:57 compute-0 nova_compute[189440]: 2025-12-11 14:07:57.057 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:58 compute-0 nova_compute[189440]: 2025-12-11 14:07:58.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:07:58 compute-0 podman[242357]: 2025-12-11 14:07:58.525406769 +0000 UTC m=+0.111124412 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:07:59 compute-0 podman[203650]: time="2025-12-11T14:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:07:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:07:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4785 "" "Go-http-client/1.1"
Dec 11 14:08:00 compute-0 nova_compute[189440]: 2025-12-11 14:08:00.366 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:00 compute-0 podman[242377]: 2025-12-11 14:08:00.484165961 +0000 UTC m=+0.084920424 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.license=GPLv2, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:08:01 compute-0 openstack_network_exporter[205834]: ERROR   14:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:08:01 compute-0 openstack_network_exporter[205834]: ERROR   14:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:08:01 compute-0 openstack_network_exporter[205834]: ERROR   14:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:08:01 compute-0 openstack_network_exporter[205834]: ERROR   14:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:08:01 compute-0 openstack_network_exporter[205834]: ERROR   14:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:08:02 compute-0 podman[242397]: 2025-12-11 14:08:02.484728195 +0000 UTC m=+0.075268879 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., distribution-scope=public, version=9.4, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 11 14:08:02 compute-0 podman[242396]: 2025-12-11 14:08:02.496496258 +0000 UTC m=+0.081002403 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:08:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:04.083 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:04.084 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:04.084 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:05 compute-0 nova_compute[189440]: 2025-12-11 14:08:05.368 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:05 compute-0 podman[242434]: 2025-12-11 14:08:05.475323498 +0000 UTC m=+0.066696831 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:08:08 compute-0 podman[242455]: 2025-12-11 14:08:08.560996979 +0000 UTC m=+0.155605975 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 11 14:08:10 compute-0 nova_compute[189440]: 2025-12-11 14:08:10.371 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:12 compute-0 podman[242481]: 2025-12-11 14:08:12.52221522 +0000 UTC m=+0.101737885 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6)
Dec 11 14:08:14 compute-0 podman[242502]: 2025-12-11 14:08:14.545235755 +0000 UTC m=+0.126391957 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:08:15 compute-0 nova_compute[189440]: 2025-12-11 14:08:15.374 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:08:15 compute-0 nova_compute[189440]: 2025-12-11 14:08:15.376 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:15 compute-0 nova_compute[189440]: 2025-12-11 14:08:15.377 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 11 14:08:15 compute-0 nova_compute[189440]: 2025-12-11 14:08:15.378 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 11 14:08:15 compute-0 nova_compute[189440]: 2025-12-11 14:08:15.380 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 11 14:08:15 compute-0 nova_compute[189440]: 2025-12-11 14:08:15.382 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:20 compute-0 nova_compute[189440]: 2025-12-11 14:08:20.378 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:20 compute-0 nova_compute[189440]: 2025-12-11 14:08:20.383 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:25 compute-0 nova_compute[189440]: 2025-12-11 14:08:25.382 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:25 compute-0 podman[242529]: 2025-12-11 14:08:25.457622233 +0000 UTC m=+0.060591958 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:08:29 compute-0 podman[242552]: 2025-12-11 14:08:29.507524134 +0000 UTC m=+0.099639285 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:08:29 compute-0 podman[203650]: time="2025-12-11T14:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:08:29 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:29.746 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:08:29 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:29.748 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:08:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:08:29 compute-0 nova_compute[189440]: 2025-12-11 14:08:29.758 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Dec 11 14:08:30 compute-0 nova_compute[189440]: 2025-12-11 14:08:30.385 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:30 compute-0 nova_compute[189440]: 2025-12-11 14:08:30.388 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:31 compute-0 openstack_network_exporter[205834]: ERROR   14:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:08:31 compute-0 openstack_network_exporter[205834]: ERROR   14:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:08:31 compute-0 openstack_network_exporter[205834]: ERROR   14:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:08:31 compute-0 openstack_network_exporter[205834]: ERROR   14:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:08:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:08:31 compute-0 openstack_network_exporter[205834]: ERROR   14:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:08:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:08:31 compute-0 podman[242572]: 2025-12-11 14:08:31.509312857 +0000 UTC m=+0.107040718 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 11 14:08:33 compute-0 podman[242592]: 2025-12-11 14:08:33.468693213 +0000 UTC m=+0.061174062 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 11 14:08:33 compute-0 podman[242593]: 2025-12-11 14:08:33.509583103 +0000 UTC m=+0.092170442 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, config_id=edpm, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, vendor=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 11 14:08:33 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:33.752 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:08:35 compute-0 nova_compute[189440]: 2025-12-11 14:08:35.387 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:35 compute-0 nova_compute[189440]: 2025-12-11 14:08:35.390 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:36 compute-0 podman[242629]: 2025-12-11 14:08:36.501067187 +0000 UTC m=+0.088986199 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:08:38 compute-0 nova_compute[189440]: 2025-12-11 14:08:38.435 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:38 compute-0 nova_compute[189440]: 2025-12-11 14:08:38.437 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:38 compute-0 nova_compute[189440]: 2025-12-11 14:08:38.515 189444 DEBUG nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 11 14:08:38 compute-0 nova_compute[189440]: 2025-12-11 14:08:38.742 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:38 compute-0 nova_compute[189440]: 2025-12-11 14:08:38.743 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:38 compute-0 nova_compute[189440]: 2025-12-11 14:08:38.756 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 11 14:08:38 compute-0 nova_compute[189440]: 2025-12-11 14:08:38.757 189444 INFO nova.compute.claims [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 11 14:08:38 compute-0 nova_compute[189440]: 2025-12-11 14:08:38.919 189444 DEBUG nova.compute.provider_tree [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.024 189444 DEBUG nova.scheduler.client.report [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.050 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.051 189444 DEBUG nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.111 189444 DEBUG nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.113 189444 DEBUG nova.network.neutron [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.262 189444 INFO nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.309 189444 DEBUG nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.423 189444 DEBUG nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.424 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.424 189444 INFO nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Creating image(s)#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.425 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.425 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.425 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.440 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.499 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.500 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.501 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.512 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:39 compute-0 podman[242647]: 2025-12-11 14:08:39.523509489 +0000 UTC m=+0.125110638 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.582 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.583 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031,backing_fmt=raw /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.965 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031,backing_fmt=raw /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk 1073741824" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.967 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:39 compute-0 nova_compute[189440]: 2025-12-11 14:08:39.968 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.045 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.048 189444 DEBUG nova.virt.disk.api [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Checking if we can resize image /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.049 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.119 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.121 189444 DEBUG nova.virt.disk.api [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Cannot resize image /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.122 189444 DEBUG nova.objects.instance [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'migration_context' on Instance uuid 081c0041-e68f-4fa9-8c7b-7139d21acf6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.241 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.243 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.245 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.273 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.341 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.343 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.344 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.358 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.390 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.430 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.431 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.498 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 1073741824" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.500 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.501 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.579 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.589 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.590 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Ensure instance console log exists: /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.592 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.593 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:40 compute-0 nova_compute[189440]: 2025-12-11 14:08:40.593 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:41 compute-0 nova_compute[189440]: 2025-12-11 14:08:41.397 189444 DEBUG nova.network.neutron [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Successfully updated port: b755009c-68a9-44e9-96bc-c78ee69f1950 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 11 14:08:41 compute-0 nova_compute[189440]: 2025-12-11 14:08:41.412 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:08:41 compute-0 nova_compute[189440]: 2025-12-11 14:08:41.413 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquired lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:08:41 compute-0 nova_compute[189440]: 2025-12-11 14:08:41.413 189444 DEBUG nova.network.neutron [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:08:41 compute-0 nova_compute[189440]: 2025-12-11 14:08:41.511 189444 DEBUG nova.compute.manager [req-ce5ba1e5-cc33-46f3-82f7-71303d68c3d9 req-b48cda7e-b1b7-448d-9ff5-eb967b07b9ff a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Received event network-changed-b755009c-68a9-44e9-96bc-c78ee69f1950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:08:41 compute-0 nova_compute[189440]: 2025-12-11 14:08:41.513 189444 DEBUG nova.compute.manager [req-ce5ba1e5-cc33-46f3-82f7-71303d68c3d9 req-b48cda7e-b1b7-448d-9ff5-eb967b07b9ff a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Refreshing instance network info cache due to event network-changed-b755009c-68a9-44e9-96bc-c78ee69f1950. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:08:41 compute-0 nova_compute[189440]: 2025-12-11 14:08:41.513 189444 DEBUG oslo_concurrency.lockutils [req-ce5ba1e5-cc33-46f3-82f7-71303d68c3d9 req-b48cda7e-b1b7-448d-9ff5-eb967b07b9ff a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:08:41 compute-0 nova_compute[189440]: 2025-12-11 14:08:41.588 189444 DEBUG nova.network.neutron [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.365 189444 DEBUG nova.network.neutron [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Updating instance_info_cache with network_info: [{"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.385 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Releasing lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.386 189444 DEBUG nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Instance network_info: |[{"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.387 189444 DEBUG oslo_concurrency.lockutils [req-ce5ba1e5-cc33-46f3-82f7-71303d68c3d9 req-b48cda7e-b1b7-448d-9ff5-eb967b07b9ff a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.388 189444 DEBUG nova.network.neutron [req-ce5ba1e5-cc33-46f3-82f7-71303d68c3d9 req-b48cda7e-b1b7-448d-9ff5-eb967b07b9ff a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Refreshing network info cache for port b755009c-68a9-44e9-96bc-c78ee69f1950 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.391 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Start _get_guest_xml network_info=[{"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:00:24Z,direct_url=<?>,disk_format='qcow2',id=714a3758-ec97-4149-8cfb-208787ab3704,min_disk=0,min_ram=0,name='cirros',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:00:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '714a3758-ec97-4149-8cfb-208787ab3704'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'size': 1, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.399 189444 WARNING nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.409 189444 DEBUG nova.virt.libvirt.host [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.410 189444 DEBUG nova.virt.libvirt.host [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.418 189444 DEBUG nova.virt.libvirt.host [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.419 189444 DEBUG nova.virt.libvirt.host [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.419 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.420 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:00:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='1d6c0fe6-4c75-4860-b5c4-bc55bee577e2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:00:24Z,direct_url=<?>,disk_format='qcow2',id=714a3758-ec97-4149-8cfb-208787ab3704,min_disk=0,min_ram=0,name='cirros',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:00:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.421 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.421 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.422 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.422 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.423 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.424 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.424 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.425 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.425 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.426 189444 DEBUG nova.virt.hardware [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.430 189444 DEBUG nova.virt.libvirt.vif [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz',id=3,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='f7b42205-1b4f-49eb-9f02-9c04957c72b4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-8y6uuoad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:08:39Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY0MzU3NTUwMDU2NzQzNzcwMzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjQzNTc1NTAwNTY3NDM3NzAzMT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY0MzU3NTUwMDU2NzQzNzcwMzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec 11 14:08:42 compute-0 nova_compute[189440]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjQzNTc1NTAwNTY3NDM3NzAzMT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY0MzU3NTUwMDU2NzQzNzcwMzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0tLQo=',user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=081c0041-e68f-4fa9-8c7b-7139d21acf6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.432 189444 DEBUG nova.network.os_vif_util [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.433 189444 DEBUG nova.network.os_vif_util [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:0f:5b,bridge_name='br-int',has_traffic_filtering=True,id=b755009c-68a9-44e9-96bc-c78ee69f1950,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb755009c-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.434 189444 DEBUG nova.objects.instance [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'pci_devices' on Instance uuid 081c0041-e68f-4fa9-8c7b-7139d21acf6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.459 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <uuid>081c0041-e68f-4fa9-8c7b-7139d21acf6b</uuid>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <name>instance-00000003</name>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <memory>524288</memory>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <nova:name>vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz</nova:name>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:08:42</nova:creationTime>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <nova:flavor name="m1.small">
Dec 11 14:08:42 compute-0 nova_compute[189440]:        <nova:memory>512</nova:memory>
Dec 11 14:08:42 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:08:42 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:08:42 compute-0 nova_compute[189440]:        <nova:ephemeral>1</nova:ephemeral>
Dec 11 14:08:42 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:08:42 compute-0 nova_compute[189440]:        <nova:user uuid="26c7a9a5c1c0404bb144cd3cba8ecf9f">admin</nova:user>
Dec 11 14:08:42 compute-0 nova_compute[189440]:        <nova:project uuid="9c30b62d3d094e1e8b410a2af9fd7d98">admin</nova:project>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="714a3758-ec97-4149-8cfb-208787ab3704"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <nova:ports>
Dec 11 14:08:42 compute-0 nova_compute[189440]:        <nova:port uuid="b755009c-68a9-44e9-96bc-c78ee69f1950">
Dec 11 14:08:42 compute-0 nova_compute[189440]:          <nova:ip type="fixed" address="192.168.0.45" ipVersion="4"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:        </nova:port>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      </nova:ports>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <system>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <entry name="serial">081c0041-e68f-4fa9-8c7b-7139d21acf6b</entry>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <entry name="uuid">081c0041-e68f-4fa9-8c7b-7139d21acf6b</entry>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </system>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <os>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  </os>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <features>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  </features>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <target dev="vdb" bus="virtio"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.config"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <interface type="ethernet">
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <mac address="fa:16:3e:5d:0f:5b"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <driver name="vhost" rx_queue_size="512"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <mtu size="1442"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <target dev="tapb755009c-68"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </interface>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/console.log" append="off"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <video>
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </video>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:08:42 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:08:42 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:08:42 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:08:42 compute-0 nova_compute[189440]: </domain>
Dec 11 14:08:42 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.468 189444 DEBUG nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Preparing to wait for external event network-vif-plugged-b755009c-68a9-44e9-96bc-c78ee69f1950 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.468 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.469 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.469 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.470 189444 DEBUG nova.virt.libvirt.vif [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz',id=3,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='f7b42205-1b4f-49eb-9f02-9c04957c72b4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-8y6uuoad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:08:39Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY0MzU3NTUwMDU2NzQzNzcwMzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjQzNTc1NTAwNTY3NDM3NzAzMT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY0MzU3NTUwMDU2NzQzNzcwMzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec 11 14:08:42 compute-0 nova_compute[189440]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjQzNTc1NTAwNTY3NDM3NzAzMT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY0MzU3NTUwMDU2NzQzNzcwMzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0tLQo=',user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=081c0041-e68f-4fa9-8c7b-7139d21acf6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.471 189444 DEBUG nova.network.os_vif_util [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.472 189444 DEBUG nova.network.os_vif_util [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:0f:5b,bridge_name='br-int',has_traffic_filtering=True,id=b755009c-68a9-44e9-96bc-c78ee69f1950,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb755009c-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.472 189444 DEBUG os_vif [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:0f:5b,bridge_name='br-int',has_traffic_filtering=True,id=b755009c-68a9-44e9-96bc-c78ee69f1950,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb755009c-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.473 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.474 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.474 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.478 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.479 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb755009c-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.480 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb755009c-68, col_values=(('external_ids', {'iface-id': 'b755009c-68a9-44e9-96bc-c78ee69f1950', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:0f:5b', 'vm-uuid': '081c0041-e68f-4fa9-8c7b-7139d21acf6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.481 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.483 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:08:42 compute-0 NetworkManager[56353]: <info>  [1765462122.4844] manager: (tapb755009c-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.490 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.492 189444 INFO os_vif [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:0f:5b,bridge_name='br-int',has_traffic_filtering=True,id=b755009c-68a9-44e9-96bc-c78ee69f1950,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb755009c-68')#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.565 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.566 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.566 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.567 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No VIF found with MAC fa:16:3e:5d:0f:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 11 14:08:42 compute-0 nova_compute[189440]: 2025-12-11 14:08:42.567 189444 INFO nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Using config drive#033[00m
Dec 11 14:08:42 compute-0 rsyslogd[236802]: message too long (8192) with configured size 8096, begin of message is: 2025-12-11 14:08:42.430 189444 DEBUG nova.virt.libvirt.vif [None req-99d70826-5a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:08:42 compute-0 rsyslogd[236802]: message too long (8192) with configured size 8096, begin of message is: 2025-12-11 14:08:42.470 189444 DEBUG nova.virt.libvirt.vif [None req-99d70826-5a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.287 189444 INFO nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Creating config drive at /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.config#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.295 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv9u8sq73 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.420 189444 DEBUG oslo_concurrency.processutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv9u8sq73" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:43 compute-0 kernel: tapb755009c-68: entered promiscuous mode
Dec 11 14:08:43 compute-0 NetworkManager[56353]: <info>  [1765462123.5057] manager: (tapb755009c-68): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec 11 14:08:43 compute-0 podman[242705]: 2025-12-11 14:08:43.512149126 +0000 UTC m=+0.104458267 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public)
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.510 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:43 compute-0 ovn_controller[97832]: 2025-12-11T14:08:43Z|00040|binding|INFO|Claiming lport b755009c-68a9-44e9-96bc-c78ee69f1950 for this chassis.
Dec 11 14:08:43 compute-0 ovn_controller[97832]: 2025-12-11T14:08:43Z|00041|binding|INFO|b755009c-68a9-44e9-96bc-c78ee69f1950: Claiming fa:16:3e:5d:0f:5b 192.168.0.45
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.521 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:0f:5b 192.168.0.45'], port_security=['fa:16:3e:5d:0f:5b 192.168.0.45'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5m7msfabwkqt-ial5xpuq4kr3-ljplzuufq3xt-port-g5qtq5s5dan5', 'neutron:cidrs': '192.168.0.45/24', 'neutron:device_id': '081c0041-e68f-4fa9-8c7b-7139d21acf6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5m7msfabwkqt-ial5xpuq4kr3-ljplzuufq3xt-port-g5qtq5s5dan5', 'neutron:project_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d7aa95c-a649-4fd4-9e5a-18c0b6217450', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d8798ec-229b-449a-9c37-334c24aa485f, chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=b755009c-68a9-44e9-96bc-c78ee69f1950) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.523 106686 INFO neutron.agent.ovn.metadata.agent [-] Port b755009c-68a9-44e9-96bc-c78ee69f1950 in datapath 62eb1d54-32e6-4ea5-8151-f2c97214c84d bound to our chassis#033[00m
Dec 11 14:08:43 compute-0 ovn_controller[97832]: 2025-12-11T14:08:43Z|00042|binding|INFO|Setting lport b755009c-68a9-44e9-96bc-c78ee69f1950 ovn-installed in OVS
Dec 11 14:08:43 compute-0 ovn_controller[97832]: 2025-12-11T14:08:43Z|00043|binding|INFO|Setting lport b755009c-68a9-44e9-96bc-c78ee69f1950 up in Southbound
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.526 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62eb1d54-32e6-4ea5-8151-f2c97214c84d#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.526 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.545 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[17338037-be8e-433d-9704-86dec1fd4634]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:08:43 compute-0 systemd-machined[155778]: New machine qemu-3-instance-00000003.
Dec 11 14:08:43 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec 11 14:08:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.588 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[281ff4bc-f0f4-4176-ac69-576b12ca9b42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.591 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[64fcea0a-7646-4d67-bdb6-529149e2ef1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:08:43 compute-0 systemd-udevd[242746]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:08:43 compute-0 NetworkManager[56353]: <info>  [1765462123.6096] device (tapb755009c-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 14:08:43 compute-0 NetworkManager[56353]: <info>  [1765462123.6102] device (tapb755009c-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 11 14:08:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.630 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[9703c4e4-1bce-4d5a-a396-cf3ccac51a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.651 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[01a80ab1-baa4-4a92-9d85-3a84f89bfa7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62eb1d54-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:cc:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378116, 'reachable_time': 24776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242775, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.666 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7efbfe3c-561a-4ceb-b5cf-0db2474d7e14]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378129, 'tstamp': 378129}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242776, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378131, 'tstamp': 378131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242776, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.667 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62eb1d54-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.669 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.671 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62eb1d54-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.671 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.672 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62eb1d54-30, col_values=(('external_ids', {'iface-id': 'dd9a733c-26da-4e0b-928d-1f82d21083bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:08:43 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:08:43.672 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.777 189444 DEBUG nova.network.neutron [req-ce5ba1e5-cc33-46f3-82f7-71303d68c3d9 req-b48cda7e-b1b7-448d-9ff5-eb967b07b9ff a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Updated VIF entry in instance network info cache for port b755009c-68a9-44e9-96bc-c78ee69f1950. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.778 189444 DEBUG nova.network.neutron [req-ce5ba1e5-cc33-46f3-82f7-71303d68c3d9 req-b48cda7e-b1b7-448d-9ff5-eb967b07b9ff a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Updating instance_info_cache with network_info: [{"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.795 189444 DEBUG oslo_concurrency.lockutils [req-ce5ba1e5-cc33-46f3-82f7-71303d68c3d9 req-b48cda7e-b1b7-448d-9ff5-eb967b07b9ff a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.834 189444 DEBUG nova.compute.manager [req-68b7b7b6-915f-4fab-9341-92221a56a9a1 req-d262cfb0-d4c5-480e-910f-792ac9ff4274 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Received event network-vif-plugged-b755009c-68a9-44e9-96bc-c78ee69f1950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.835 189444 DEBUG oslo_concurrency.lockutils [req-68b7b7b6-915f-4fab-9341-92221a56a9a1 req-d262cfb0-d4c5-480e-910f-792ac9ff4274 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.836 189444 DEBUG oslo_concurrency.lockutils [req-68b7b7b6-915f-4fab-9341-92221a56a9a1 req-d262cfb0-d4c5-480e-910f-792ac9ff4274 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.837 189444 DEBUG oslo_concurrency.lockutils [req-68b7b7b6-915f-4fab-9341-92221a56a9a1 req-d262cfb0-d4c5-480e-910f-792ac9ff4274 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:43 compute-0 nova_compute[189440]: 2025-12-11 14:08:43.837 189444 DEBUG nova.compute.manager [req-68b7b7b6-915f-4fab-9341-92221a56a9a1 req-d262cfb0-d4c5-480e-910f-792ac9ff4274 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Processing event network-vif-plugged-b755009c-68a9-44e9-96bc-c78ee69f1950 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.016 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765462124.0154965, 081c0041-e68f-4fa9-8c7b-7139d21acf6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.019 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] VM Started (Lifecycle Event)#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.023 189444 DEBUG nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.035 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.039 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.047 189444 INFO nova.virt.libvirt.driver [-] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Instance spawned successfully.#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.049 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.051 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.077 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.078 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765462124.0156322, 081c0041-e68f-4fa9-8c7b-7139d21acf6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.079 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] VM Paused (Lifecycle Event)#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.087 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.087 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.089 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.090 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.091 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.092 189444 DEBUG nova.virt.libvirt.driver [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.100 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.107 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765462124.0317626, 081c0041-e68f-4fa9-8c7b-7139d21acf6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.107 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] VM Resumed (Lifecycle Event)#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.131 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.138 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.162 189444 INFO nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Took 4.74 seconds to spawn the instance on the hypervisor.#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.163 189444 DEBUG nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.165 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.243 189444 INFO nova.compute.manager [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Took 5.53 seconds to build instance.#033[00m
Dec 11 14:08:44 compute-0 nova_compute[189440]: 2025-12-11 14:08:44.261 189444 DEBUG oslo_concurrency.lockutils [None req-99d70826-5a2d-4840-89ca-2d5ba569d5d5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:44 compute-0 podman[242784]: 2025-12-11 14:08:44.816297302 +0000 UTC m=+0.126377747 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:08:45 compute-0 nova_compute[189440]: 2025-12-11 14:08:45.300 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:45 compute-0 nova_compute[189440]: 2025-12-11 14:08:45.301 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:08:45 compute-0 nova_compute[189440]: 2025-12-11 14:08:45.394 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:45 compute-0 nova_compute[189440]: 2025-12-11 14:08:45.914 189444 DEBUG nova.compute.manager [req-72dcfa31-366a-404c-b7a5-aa3c9300f77f req-94fc8933-3c36-40c3-9d0c-dc3b9690961e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Received event network-vif-plugged-b755009c-68a9-44e9-96bc-c78ee69f1950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:08:45 compute-0 nova_compute[189440]: 2025-12-11 14:08:45.914 189444 DEBUG oslo_concurrency.lockutils [req-72dcfa31-366a-404c-b7a5-aa3c9300f77f req-94fc8933-3c36-40c3-9d0c-dc3b9690961e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:45 compute-0 nova_compute[189440]: 2025-12-11 14:08:45.915 189444 DEBUG oslo_concurrency.lockutils [req-72dcfa31-366a-404c-b7a5-aa3c9300f77f req-94fc8933-3c36-40c3-9d0c-dc3b9690961e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:45 compute-0 nova_compute[189440]: 2025-12-11 14:08:45.915 189444 DEBUG oslo_concurrency.lockutils [req-72dcfa31-366a-404c-b7a5-aa3c9300f77f req-94fc8933-3c36-40c3-9d0c-dc3b9690961e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:45 compute-0 nova_compute[189440]: 2025-12-11 14:08:45.916 189444 DEBUG nova.compute.manager [req-72dcfa31-366a-404c-b7a5-aa3c9300f77f req-94fc8933-3c36-40c3-9d0c-dc3b9690961e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] No waiting events found dispatching network-vif-plugged-b755009c-68a9-44e9-96bc-c78ee69f1950 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:08:45 compute-0 nova_compute[189440]: 2025-12-11 14:08:45.916 189444 WARNING nova.compute.manager [req-72dcfa31-366a-404c-b7a5-aa3c9300f77f req-94fc8933-3c36-40c3-9d0c-dc3b9690961e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Received unexpected event network-vif-plugged-b755009c-68a9-44e9-96bc-c78ee69f1950 for instance with vm_state active and task_state None.#033[00m
Dec 11 14:08:47 compute-0 nova_compute[189440]: 2025-12-11 14:08:47.483 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:49 compute-0 nova_compute[189440]: 2025-12-11 14:08:49.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:50 compute-0 nova_compute[189440]: 2025-12-11 14:08:50.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:50 compute-0 nova_compute[189440]: 2025-12-11 14:08:50.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:08:50 compute-0 nova_compute[189440]: 2025-12-11 14:08:50.396 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:51 compute-0 nova_compute[189440]: 2025-12-11 14:08:51.090 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:08:51 compute-0 nova_compute[189440]: 2025-12-11 14:08:51.092 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:08:51 compute-0 nova_compute[189440]: 2025-12-11 14:08:51.093 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:08:52 compute-0 nova_compute[189440]: 2025-12-11 14:08:52.319 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updating instance_info_cache with network_info: [{"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:08:52 compute-0 nova_compute[189440]: 2025-12-11 14:08:52.338 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:08:52 compute-0 nova_compute[189440]: 2025-12-11 14:08:52.340 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:08:52 compute-0 nova_compute[189440]: 2025-12-11 14:08:52.341 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:52 compute-0 nova_compute[189440]: 2025-12-11 14:08:52.342 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:52 compute-0 nova_compute[189440]: 2025-12-11 14:08:52.487 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.259 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.284 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.285 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.285 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.286 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.403 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.470 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.473 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.544 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.547 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.615 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.617 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.687 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.702 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.768 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.770 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.831 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.833 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.924 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:53 compute-0 nova_compute[189440]: 2025-12-11 14:08:53.925 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.009 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.020 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.088 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.091 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.189 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.191 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.252 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.254 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.315 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.791 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.794 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4935MB free_disk=72.35062408447266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.795 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.796 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.934 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.934 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.935 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.935 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:08:54 compute-0 nova_compute[189440]: 2025-12-11 14:08:54.935 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:08:55 compute-0 nova_compute[189440]: 2025-12-11 14:08:55.122 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:08:55 compute-0 nova_compute[189440]: 2025-12-11 14:08:55.194 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:08:55 compute-0 nova_compute[189440]: 2025-12-11 14:08:55.253 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:08:55 compute-0 nova_compute[189440]: 2025-12-11 14:08:55.254 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:08:55 compute-0 nova_compute[189440]: 2025-12-11 14:08:55.398 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:56 compute-0 nova_compute[189440]: 2025-12-11 14:08:56.251 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:56 compute-0 nova_compute[189440]: 2025-12-11 14:08:56.253 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:56 compute-0 podman[242844]: 2025-12-11 14:08:56.501965594 +0000 UTC m=+0.089334476 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 14:08:57 compute-0 nova_compute[189440]: 2025-12-11 14:08:57.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:08:57 compute-0 nova_compute[189440]: 2025-12-11 14:08:57.490 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:08:59 compute-0 podman[203650]: time="2025-12-11T14:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:08:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:08:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4787 "" "Go-http-client/1.1"
Dec 11 14:09:00 compute-0 nova_compute[189440]: 2025-12-11 14:09:00.401 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:00 compute-0 podman[242868]: 2025-12-11 14:09:00.522658606 +0000 UTC m=+0.106079685 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 14:09:01 compute-0 openstack_network_exporter[205834]: ERROR   14:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:09:01 compute-0 openstack_network_exporter[205834]: ERROR   14:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:09:01 compute-0 openstack_network_exporter[205834]: ERROR   14:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:09:01 compute-0 openstack_network_exporter[205834]: ERROR   14:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:09:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:09:01 compute-0 openstack_network_exporter[205834]: ERROR   14:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:09:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:09:02 compute-0 nova_compute[189440]: 2025-12-11 14:09:02.494 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:02 compute-0 podman[242890]: 2025-12-11 14:09:02.503801379 +0000 UTC m=+0.082832095 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 11 14:09:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:09:04.084 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:09:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:09:04.084 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:09:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:09:04.085 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:09:04 compute-0 podman[242909]: 2025-12-11 14:09:04.48317022 +0000 UTC m=+0.072360842 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 14:09:04 compute-0 podman[242910]: 2025-12-11 14:09:04.523296893 +0000 UTC m=+0.104256844 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, release-0.7.12=, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0)
Dec 11 14:09:05 compute-0 nova_compute[189440]: 2025-12-11 14:09:05.406 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:07 compute-0 nova_compute[189440]: 2025-12-11 14:09:07.497 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:07 compute-0 podman[242946]: 2025-12-11 14:09:07.525357202 +0000 UTC m=+0.114949622 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 11 14:09:10 compute-0 nova_compute[189440]: 2025-12-11 14:09:10.409 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:10 compute-0 podman[242965]: 2025-12-11 14:09:10.522757702 +0000 UTC m=+0.115585586 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 14:09:12 compute-0 nova_compute[189440]: 2025-12-11 14:09:12.500 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:13 compute-0 ovn_controller[97832]: 2025-12-11T14:09:13Z|00044|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 11 14:09:14 compute-0 podman[242991]: 2025-12-11 14:09:14.537024435 +0000 UTC m=+0.118064685 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, release=1755695350, architecture=x86_64, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Dec 11 14:09:15 compute-0 nova_compute[189440]: 2025-12-11 14:09:15.411 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:15 compute-0 podman[243012]: 2025-12-11 14:09:15.469734042 +0000 UTC m=+0.065805830 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:09:17 compute-0 ovn_controller[97832]: 2025-12-11T14:09:17Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:0f:5b 192.168.0.45
Dec 11 14:09:17 compute-0 ovn_controller[97832]: 2025-12-11T14:09:17Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:0f:5b 192.168.0.45
Dec 11 14:09:17 compute-0 nova_compute[189440]: 2025-12-11 14:09:17.504 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:20 compute-0 nova_compute[189440]: 2025-12-11 14:09:20.415 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:22 compute-0 nova_compute[189440]: 2025-12-11 14:09:22.508 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:25 compute-0 nova_compute[189440]: 2025-12-11 14:09:25.420 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:27 compute-0 podman[243045]: 2025-12-11 14:09:27.505607129 +0000 UTC m=+0.103631528 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:09:27 compute-0 nova_compute[189440]: 2025-12-11 14:09:27.520 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:29 compute-0 podman[203650]: time="2025-12-11T14:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:09:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:09:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
Dec 11 14:09:30 compute-0 nova_compute[189440]: 2025-12-11 14:09:30.423 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:31 compute-0 openstack_network_exporter[205834]: ERROR   14:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:09:31 compute-0 openstack_network_exporter[205834]: ERROR   14:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:09:31 compute-0 openstack_network_exporter[205834]: ERROR   14:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:09:31 compute-0 openstack_network_exporter[205834]: ERROR   14:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:09:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:09:31 compute-0 openstack_network_exporter[205834]: ERROR   14:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:09:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:09:31 compute-0 podman[243069]: 2025-12-11 14:09:31.536097073 +0000 UTC m=+0.124681718 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 14:09:32 compute-0 nova_compute[189440]: 2025-12-11 14:09:32.524 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:33 compute-0 podman[243088]: 2025-12-11 14:09:33.49724106 +0000 UTC m=+0.096079265 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 14:09:35 compute-0 nova_compute[189440]: 2025-12-11 14:09:35.424 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:35 compute-0 podman[243107]: 2025-12-11 14:09:35.512005965 +0000 UTC m=+0.094675971 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 11 14:09:35 compute-0 podman[243108]: 2025-12-11 14:09:35.565868569 +0000 UTC m=+0.136738020 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0)
Dec 11 14:09:37 compute-0 nova_compute[189440]: 2025-12-11 14:09:37.530 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:38 compute-0 podman[243144]: 2025-12-11 14:09:38.553648921 +0000 UTC m=+0.139573419 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251210)
Dec 11 14:09:40 compute-0 nova_compute[189440]: 2025-12-11 14:09:40.428 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:41 compute-0 podman[243164]: 2025-12-11 14:09:41.545528981 +0000 UTC m=+0.138311978 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Dec 11 14:09:42 compute-0 nova_compute[189440]: 2025-12-11 14:09:42.535 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.982 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.983 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.991 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 11 14:09:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:42.992 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/081c0041-e68f-4fa9-8c7b-7139d21acf6b -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}cccfdb98f7814d2104ef30522629f30f2e7025f3d377e4b2e1b0c401a523009e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.471 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Thu, 11 Dec 2025 14:09:43 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-8d0496f5-8cde-4a90-ad96-dfef2dc13dbb x-openstack-request-id: req-8d0496f5-8cde-4a90-ad96-dfef2dc13dbb _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.471 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "081c0041-e68f-4fa9-8c7b-7139d21acf6b", "name": "vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz", "status": "ACTIVE", "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "user_id": "26c7a9a5c1c0404bb144cd3cba8ecf9f", "metadata": {"metering.server_group": "f7b42205-1b4f-49eb-9f02-9c04957c72b4"}, "hostId": "8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3", "image": {"id": "714a3758-ec97-4149-8cfb-208787ab3704", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/714a3758-ec97-4149-8cfb-208787ab3704"}]}, "flavor": {"id": "1d6c0fe6-4c75-4860-b5c4-bc55bee577e2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/1d6c0fe6-4c75-4860-b5c4-bc55bee577e2"}]}, "created": "2025-12-11T14:08:35Z", "updated": "2025-12-11T14:08:44Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.45", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5d:0f:5b"}, {"version": 4, "addr": "192.168.122.242", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5d:0f:5b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/081c0041-e68f-4fa9-8c7b-7139d21acf6b"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/081c0041-e68f-4fa9-8c7b-7139d21acf6b"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-11T14:08:44.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.472 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/081c0041-e68f-4fa9-8c7b-7139d21acf6b used request id req-8d0496f5-8cde-4a90-ad96-dfef2dc13dbb request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.474 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '081c0041-e68f-4fa9-8c7b-7139d21acf6b', 'name': 'vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.480 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '82437023-b24d-48bf-af1c-d1957df4da67', 'name': 'test_0', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.484 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2', 'name': 'vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.485 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.485 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:09:43.485549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.492 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 081c0041-e68f-4fa9-8c7b-7139d21acf6b / tapb755009c-68 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.492 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.bytes volume: 2118 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.499 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.507 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes volume: 4632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.508 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.509 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:09:43.509417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.558 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/cpu volume: 33320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.591 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/cpu volume: 40660000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.620 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/cpu volume: 325630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.621 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.622 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:09:43.622445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.650 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.650 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.651 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.680 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.680 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.681 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.717 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.717 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.717 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.718 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.718 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.718 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.719 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.719 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.719 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.720 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.720 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/memory.usage volume: 49.6015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.720 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.720 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/memory.usage volume: 49.11328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.721 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.721 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.721 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.721 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:09:43.718826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:09:43.720012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:09:43.721239) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.722 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.723 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.723 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.723 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz>]
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-11T14:09:43.723148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.723 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.724 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.724 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.724 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.724 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets volume: 39 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.725 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:09:43.724061) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.725 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.725 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.725 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.726 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:09:43.725554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:09:43.726886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.726 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.727 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.727 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.727 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.728 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.728 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:09:43.728290) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.728 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.728 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.729 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.729 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.729 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.729 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.729 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.729 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.730 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.730 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:09:43.730762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.804 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 500931517 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.805 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 79030432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.805 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 61428410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.867 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 414087761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.868 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 86850533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.868 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 54519228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.938 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 386530042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.938 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 87643374 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.939 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 69768051 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.939 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.940 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:09:43.940635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.941 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.941 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.942 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.942 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.942 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.942 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.943 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.943 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.943 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.944 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.945 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.945 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.945 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.946 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.946 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.947 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:09:43.945280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.947 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.947 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.948 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.948 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.948 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.949 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.949 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.949 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.949 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.950 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.950 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:09:43.949991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.951 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.951 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.951 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.952 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.952 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.952 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 41828352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.953 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.953 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.954 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.954 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.955 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 1743953967 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.955 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 10306999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.955 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.956 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 1535528083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.956 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 13914030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.957 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.957 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 7708596857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:09:43.954761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.958 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 207693799 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.958 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.959 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.959 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.959 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.959 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:09:43.959638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.960 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.960 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.961 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.961 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.962 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:09:43.961979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.962 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.962 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.963 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.963 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.963 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.964 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.964 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.964 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.964 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.965 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.965 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.965 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.965 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.965 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.966 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.966 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.966 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.967 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:09:43.965930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.968 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.968 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.968 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.968 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-11T14:09:43.968312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.968 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz>]
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.969 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.969 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.969 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.969 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.970 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.970 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:09:43.969442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.970 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.970 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.970 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.971 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.971 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:09:43.970913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.971 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.971 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.972 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.972 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.972 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.973 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.973 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:09:43.972524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.973 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.973 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.974 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.974 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:09:43.973979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.974 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.974 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.975 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.975 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.975 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.975 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.975 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.975 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.976 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.976 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.976 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.976 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.977 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:09:43.975609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.977 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.977 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.977 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.977 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.978 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.978 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.978 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.978 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.979 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.979 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:09:43.977554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.979 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.979 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.980 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.980 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.983 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.983 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.983 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.983 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.983 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.983 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.984 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.984 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.984 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.984 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.984 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.984 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.985 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.985 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.985 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:09:43.985 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:09:44 compute-0 podman[243192]: 2025-12-11 14:09:44.829544021 +0000 UTC m=+0.105818142 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64)
Dec 11 14:09:45 compute-0 nova_compute[189440]: 2025-12-11 14:09:45.429 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:46 compute-0 podman[243211]: 2025-12-11 14:09:46.511324078 +0000 UTC m=+0.094397425 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:09:47 compute-0 nova_compute[189440]: 2025-12-11 14:09:47.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:09:47 compute-0 nova_compute[189440]: 2025-12-11 14:09:47.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:09:47 compute-0 nova_compute[189440]: 2025-12-11 14:09:47.540 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:50 compute-0 nova_compute[189440]: 2025-12-11 14:09:50.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:09:50 compute-0 nova_compute[189440]: 2025-12-11 14:09:50.432 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:51 compute-0 nova_compute[189440]: 2025-12-11 14:09:51.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:09:51 compute-0 nova_compute[189440]: 2025-12-11 14:09:51.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:09:51 compute-0 nova_compute[189440]: 2025-12-11 14:09:51.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:09:51 compute-0 nova_compute[189440]: 2025-12-11 14:09:51.821 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:09:51 compute-0 nova_compute[189440]: 2025-12-11 14:09:51.821 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:09:51 compute-0 nova_compute[189440]: 2025-12-11 14:09:51.822 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:09:51 compute-0 nova_compute[189440]: 2025-12-11 14:09:51.823 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:09:52 compute-0 nova_compute[189440]: 2025-12-11 14:09:52.543 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:53 compute-0 nova_compute[189440]: 2025-12-11 14:09:53.354 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:09:53 compute-0 nova_compute[189440]: 2025-12-11 14:09:53.425 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:09:53 compute-0 nova_compute[189440]: 2025-12-11 14:09:53.426 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:09:53 compute-0 nova_compute[189440]: 2025-12-11 14:09:53.427 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:09:53 compute-0 nova_compute[189440]: 2025-12-11 14:09:53.428 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:09:54 compute-0 nova_compute[189440]: 2025-12-11 14:09:54.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:09:54 compute-0 nova_compute[189440]: 2025-12-11 14:09:54.634 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:09:54 compute-0 nova_compute[189440]: 2025-12-11 14:09:54.635 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:09:54 compute-0 nova_compute[189440]: 2025-12-11 14:09:54.635 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:09:54 compute-0 nova_compute[189440]: 2025-12-11 14:09:54.636 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.239 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.302 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.304 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.365 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.367 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.430 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.432 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.449 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.504 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.512 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.590 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.592 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.654 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.656 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.713 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.714 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.769 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.780 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.844 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.845 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.904 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.906 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.972 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:55 compute-0 nova_compute[189440]: 2025-12-11 14:09:55.974 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.036 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.369 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.371 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4884MB free_disk=72.32905578613281GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.372 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.373 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.453 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.454 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.455 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.455 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.456 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.542 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.565 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.570 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:09:56 compute-0 nova_compute[189440]: 2025-12-11 14:09:56.571 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:09:57 compute-0 nova_compute[189440]: 2025-12-11 14:09:57.547 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:09:58 compute-0 podman[243272]: 2025-12-11 14:09:58.462157477 +0000 UTC m=+0.068855258 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:09:58 compute-0 nova_compute[189440]: 2025-12-11 14:09:58.569 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:09:58 compute-0 nova_compute[189440]: 2025-12-11 14:09:58.573 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:09:58 compute-0 nova_compute[189440]: 2025-12-11 14:09:58.573 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:09:59 compute-0 podman[203650]: time="2025-12-11T14:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:09:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:09:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4786 "" "Go-http-client/1.1"
Dec 11 14:10:00 compute-0 nova_compute[189440]: 2025-12-11 14:10:00.437 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:01 compute-0 openstack_network_exporter[205834]: ERROR   14:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:10:01 compute-0 openstack_network_exporter[205834]: ERROR   14:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:10:01 compute-0 openstack_network_exporter[205834]: ERROR   14:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:10:01 compute-0 openstack_network_exporter[205834]: ERROR   14:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:10:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:10:01 compute-0 openstack_network_exporter[205834]: ERROR   14:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:10:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:10:02 compute-0 podman[243296]: 2025-12-11 14:10:02.492916396 +0000 UTC m=+0.084968616 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3)
Dec 11 14:10:02 compute-0 nova_compute[189440]: 2025-12-11 14:10:02.551 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:04.085 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:04.085 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:04.086 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:04 compute-0 podman[243316]: 2025-12-11 14:10:04.495109787 +0000 UTC m=+0.085542281 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 11 14:10:05 compute-0 nova_compute[189440]: 2025-12-11 14:10:05.439 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:06 compute-0 podman[243335]: 2025-12-11 14:10:06.456307697 +0000 UTC m=+0.060703920 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 14:10:06 compute-0 podman[243336]: 2025-12-11 14:10:06.481889625 +0000 UTC m=+0.080249102 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release=1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 11 14:10:07 compute-0 nova_compute[189440]: 2025-12-11 14:10:07.554 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:09 compute-0 podman[243374]: 2025-12-11 14:10:09.520873066 +0000 UTC m=+0.116390437 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 11 14:10:10 compute-0 nova_compute[189440]: 2025-12-11 14:10:10.445 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:12 compute-0 podman[243393]: 2025-12-11 14:10:12.553889172 +0000 UTC m=+0.153803383 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 11 14:10:12 compute-0 nova_compute[189440]: 2025-12-11 14:10:12.557 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:15 compute-0 nova_compute[189440]: 2025-12-11 14:10:15.450 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:15 compute-0 podman[243418]: 2025-12-11 14:10:15.538652541 +0000 UTC m=+0.119620096 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Dec 11 14:10:17 compute-0 podman[243438]: 2025-12-11 14:10:17.476424653 +0000 UTC m=+0.071125303 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:10:17 compute-0 nova_compute[189440]: 2025-12-11 14:10:17.560 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:20 compute-0 nova_compute[189440]: 2025-12-11 14:10:20.452 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:22 compute-0 nova_compute[189440]: 2025-12-11 14:10:22.565 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:25 compute-0 nova_compute[189440]: 2025-12-11 14:10:25.455 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:27 compute-0 nova_compute[189440]: 2025-12-11 14:10:27.569 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:29 compute-0 podman[243462]: 2025-12-11 14:10:29.482021246 +0000 UTC m=+0.073583021 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:10:29 compute-0 podman[203650]: time="2025-12-11T14:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:10:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:10:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
Dec 11 14:10:30 compute-0 nova_compute[189440]: 2025-12-11 14:10:30.456 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:31 compute-0 openstack_network_exporter[205834]: ERROR   14:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:10:31 compute-0 openstack_network_exporter[205834]: ERROR   14:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:10:31 compute-0 openstack_network_exporter[205834]: ERROR   14:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:10:31 compute-0 openstack_network_exporter[205834]: ERROR   14:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:10:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:10:31 compute-0 openstack_network_exporter[205834]: ERROR   14:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:10:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:10:31 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:31.635 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:10:31 compute-0 nova_compute[189440]: 2025-12-11 14:10:31.636 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:31 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:31.637 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:10:32 compute-0 nova_compute[189440]: 2025-12-11 14:10:32.573 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:33 compute-0 podman[243488]: 2025-12-11 14:10:33.491050901 +0000 UTC m=+0.094669892 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible)
Dec 11 14:10:35 compute-0 nova_compute[189440]: 2025-12-11 14:10:35.462 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:35 compute-0 podman[243505]: 2025-12-11 14:10:35.492071573 +0000 UTC m=+0.094323243 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:10:37 compute-0 podman[243525]: 2025-12-11 14:10:37.526532875 +0000 UTC m=+0.104485540 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Dec 11 14:10:37 compute-0 podman[243526]: 2025-12-11 14:10:37.553153399 +0000 UTC m=+0.134200288 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, maintainer=Red Hat, Inc., config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, version=9.4, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible)
Dec 11 14:10:37 compute-0 nova_compute[189440]: 2025-12-11 14:10:37.575 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:37 compute-0 nova_compute[189440]: 2025-12-11 14:10:37.809 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:37 compute-0 nova_compute[189440]: 2025-12-11 14:10:37.810 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:37 compute-0 nova_compute[189440]: 2025-12-11 14:10:37.997 189444 DEBUG nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.169 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.170 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.183 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.184 189444 INFO nova.compute.claims [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.382 189444 DEBUG nova.compute.provider_tree [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.414 189444 DEBUG nova.scheduler.client.report [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.446 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.456 189444 DEBUG nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.518 189444 DEBUG nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.518 189444 DEBUG nova.network.neutron [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.545 189444 INFO nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.620 189444 DEBUG nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.732 189444 DEBUG nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.740 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.741 189444 INFO nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Creating image(s)#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.742 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.742 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.744 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.767 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.846 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.848 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.849 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.868 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.942 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.944 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031,backing_fmt=raw /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:38 compute-0 nova_compute[189440]: 2025-12-11 14:10:38.991 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031,backing_fmt=raw /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.000 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.001 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.078 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.079 189444 DEBUG nova.virt.disk.api [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Checking if we can resize image /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.080 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.135 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.137 189444 DEBUG nova.virt.disk.api [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Cannot resize image /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
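The "Cannot resize image ... to a smaller size" message above is the expected outcome of a guard check, not an error: nova reads the image's virtual size from `qemu-img info --output=json` and only ever grows a disk. A rough illustration of that comparison (our own function; the JSON sample is hypothetical, not from this host):

```python
import json

# Sketch of the grow-only resize check behind the log message above:
# the requested size must exceed the virtual size reported by
# `qemu-img info --output=json`, otherwise no resize happens.

def can_resize_image(qemu_img_info_json: str, requested_size: int) -> bool:
    virtual_size = json.loads(qemu_img_info_json)["virtual-size"]
    return requested_size > virtual_size

# Hypothetical qemu-img output for a 1 GiB qcow2 image.
info = json.dumps({"virtual-size": 1073741824, "format": "qcow2"})

equal = can_resize_image(info, 1073741824)   # same size: nothing to do
grow = can_resize_image(info, 2147483648)    # larger: growing is allowed
```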
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.137 189444 DEBUG nova.objects.instance [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'migration_context' on Instance uuid 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.231 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.232 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.233 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.246 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.323 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.324 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.325 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.336 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.399 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.401 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.444 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.445 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.446 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.504 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.511 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
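The `disk` and `disk.eph0` files created above are thin qcow2 overlays whose backing files are the shared images under `/var/lib/nova/instances/_base`, so multiple instances share one base image copy-on-write. A sketch of that command pattern as seen in the log (the helper name is ours, not nova's):

```python
# Sketch of the qcow2-overlay creation pattern visible in the log:
# `qemu-img create -f qcow2 -o backing_file=<base>,backing_fmt=raw <target> <size>`

def qcow2_overlay_cmd(backing_file: str, target: str, size_bytes: int) -> list:
    """Build the argv for a copy-on-write overlay on a raw base image."""
    return [
        "qemu-img", "create", "-f", "qcow2",
        "-o", f"backing_file={backing_file},backing_fmt=raw",
        target, str(size_bytes),
    ]

# Paths and size copied from the root-disk command in the log above.
cmd = qcow2_overlay_cmd(
    "/var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031",
    "/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk",
    1073741824,
)
```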
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.512 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Ensure instance console log exists: /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.512 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.513 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.513 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:39 compute-0 nova_compute[189440]: 2025-12-11 14:10:39.923 189444 DEBUG nova.network.neutron [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Successfully updated port: ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 11 14:10:40 compute-0 nova_compute[189440]: 2025-12-11 14:10:40.002 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:10:40 compute-0 nova_compute[189440]: 2025-12-11 14:10:40.003 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquired lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:10:40 compute-0 nova_compute[189440]: 2025-12-11 14:10:40.004 189444 DEBUG nova.network.neutron [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:10:40 compute-0 nova_compute[189440]: 2025-12-11 14:10:40.014 189444 DEBUG nova.compute.manager [req-577cb289-f5b7-4083-bf48-66c8bcd6ace0 req-364218a9-70be-4bda-83cc-e24e509b8819 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Received event network-changed-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:10:40 compute-0 nova_compute[189440]: 2025-12-11 14:10:40.015 189444 DEBUG nova.compute.manager [req-577cb289-f5b7-4083-bf48-66c8bcd6ace0 req-364218a9-70be-4bda-83cc-e24e509b8819 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Refreshing instance network info cache due to event network-changed-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:10:40 compute-0 nova_compute[189440]: 2025-12-11 14:10:40.016 189444 DEBUG oslo_concurrency.lockutils [req-577cb289-f5b7-4083-bf48-66c8bcd6ace0 req-364218a9-70be-4bda-83cc-e24e509b8819 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:10:40 compute-0 nova_compute[189440]: 2025-12-11 14:10:40.335 189444 DEBUG nova.network.neutron [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:10:40 compute-0 nova_compute[189440]: 2025-12-11 14:10:40.463 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:40 compute-0 podman[243590]: 2025-12-11 14:10:40.49792804 +0000 UTC m=+0.089562849 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 11 14:10:40 compute-0 nova_compute[189440]: 2025-12-11 14:10:40.983 189444 DEBUG nova.network.neutron [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updating instance_info_cache with network_info: [{"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.020 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Releasing lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.021 189444 DEBUG nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Instance network_info: |[{"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
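The `network_info` blob logged above is plain JSON, which makes it easy to pull addresses out when debugging. A short sketch using the structure from the log, trimmed to only the fields touched here:

```python
import json

# Extract fixed and floating IPs from a nova network_info blob.
# The JSON is a trimmed copy of the structure in the log line above.
network_info = json.loads("""
[{"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7",
  "address": "fa:16:3e:64:de:bd",
  "devname": "tapffab0c4b-81",
  "network": {"subnets": [{"cidr": "192.168.0.0/24",
    "ips": [{"address": "192.168.0.232", "type": "fixed",
             "floating_ips": [{"address": "192.168.122.210",
                               "type": "floating"}]}]}]}}]
""")

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            fixed = ip["address"]
            floating = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["devname"], fixed, floating)
```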
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.023 189444 DEBUG oslo_concurrency.lockutils [req-577cb289-f5b7-4083-bf48-66c8bcd6ace0 req-364218a9-70be-4bda-83cc-e24e509b8819 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.024 189444 DEBUG nova.network.neutron [req-577cb289-f5b7-4083-bf48-66c8bcd6ace0 req-364218a9-70be-4bda-83cc-e24e509b8819 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Refreshing network info cache for port ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.031 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Start _get_guest_xml network_info=[{"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:00:24Z,direct_url=<?>,disk_format='qcow2',id=714a3758-ec97-4149-8cfb-208787ab3704,min_disk=0,min_ram=0,name='cirros',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:00:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '714a3758-ec97-4149-8cfb-208787ab3704'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'size': 1, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.044 189444 WARNING nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.060 189444 DEBUG nova.virt.libvirt.host [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.062 189444 DEBUG nova.virt.libvirt.host [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.071 189444 DEBUG nova.virt.libvirt.host [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.072 189444 DEBUG nova.virt.libvirt.host [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.073 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.075 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:00:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='1d6c0fe6-4c75-4860-b5c4-bc55bee577e2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:00:24Z,direct_url=<?>,disk_format='qcow2',id=714a3758-ec97-4149-8cfb-208787ab3704,min_disk=0,min_ram=0,name='cirros',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:00:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.076 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.077 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.078 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.080 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.082 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.089 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.090 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.093 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.094 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.095 189444 DEBUG nova.virt.hardware [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.101 189444 DEBUG nova.virt.libvirt.vif [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:10:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr',id=4,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='f7b42205-1b4f-49eb-9f02-9c04957c72b4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-a9gcnjo0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:10:38Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyNDEzMzcxMTM5NDIxNjA3NjY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI0MTMzNzExMzk0MjE2MDc2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyNDEzMzcxMTM5NDIxNjA3NjY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec 11 14:10:41 compute-0 nova_compute[189440]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI0MTMzNzExMzk0MjE2MDc2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyNDEzMzcxMTM5NDIxNjA3NjY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0tLQo=',user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=125c0574-9fcf-4ecf-9bd8-c4008826d3b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.103 189444 DEBUG nova.network.os_vif_util [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.106 189444 DEBUG nova.network.os_vif_util [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:de:bd,bridge_name='br-int',has_traffic_filtering=True,id=ffab0c4b-81ca-4416-acb2-bf5d1b973fc7,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapffab0c4b-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.109 189444 DEBUG nova.objects.instance [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'pci_devices' on Instance uuid 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.134 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <uuid>125c0574-9fcf-4ecf-9bd8-c4008826d3b3</uuid>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <name>instance-00000004</name>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <memory>524288</memory>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <nova:name>vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr</nova:name>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:10:41</nova:creationTime>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <nova:flavor name="m1.small">
Dec 11 14:10:41 compute-0 nova_compute[189440]:        <nova:memory>512</nova:memory>
Dec 11 14:10:41 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:10:41 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:10:41 compute-0 nova_compute[189440]:        <nova:ephemeral>1</nova:ephemeral>
Dec 11 14:10:41 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:10:41 compute-0 nova_compute[189440]:        <nova:user uuid="26c7a9a5c1c0404bb144cd3cba8ecf9f">admin</nova:user>
Dec 11 14:10:41 compute-0 nova_compute[189440]:        <nova:project uuid="9c30b62d3d094e1e8b410a2af9fd7d98">admin</nova:project>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="714a3758-ec97-4149-8cfb-208787ab3704"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <nova:ports>
Dec 11 14:10:41 compute-0 nova_compute[189440]:        <nova:port uuid="ffab0c4b-81ca-4416-acb2-bf5d1b973fc7">
Dec 11 14:10:41 compute-0 nova_compute[189440]:          <nova:ip type="fixed" address="192.168.0.232" ipVersion="4"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:        </nova:port>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      </nova:ports>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <system>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <entry name="serial">125c0574-9fcf-4ecf-9bd8-c4008826d3b3</entry>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <entry name="uuid">125c0574-9fcf-4ecf-9bd8-c4008826d3b3</entry>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </system>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <os>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  </os>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <features>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  </features>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <target dev="vdb" bus="virtio"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.config"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <interface type="ethernet">
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <mac address="fa:16:3e:64:de:bd"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <driver name="vhost" rx_queue_size="512"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <mtu size="1442"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <target dev="tapffab0c4b-81"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </interface>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/console.log" append="off"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <video>
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </video>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:10:41 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:10:41 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:10:41 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:10:41 compute-0 nova_compute[189440]: </domain>
Dec 11 14:10:41 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.147 189444 DEBUG nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Preparing to wait for external event network-vif-plugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.147 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.147 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.148 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.148 189444 DEBUG nova.virt.libvirt.vif [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:10:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr',id=4,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='f7b42205-1b4f-49eb-9f02-9c04957c72b4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-a9gcnjo0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:10:38Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyNDEzMzcxMTM5NDIxNjA3NjY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI0MTMzNzExMzk0MjE2MDc2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyNDEzMzcxMTM5NDIxNjA3NjY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.149 189444 DEBUG nova.network.os_vif_util [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.149 189444 DEBUG nova.network.os_vif_util [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:de:bd,bridge_name='br-int',has_traffic_filtering=True,id=ffab0c4b-81ca-4416-acb2-bf5d1b973fc7,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapffab0c4b-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.150 189444 DEBUG os_vif [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:de:bd,bridge_name='br-int',has_traffic_filtering=True,id=ffab0c4b-81ca-4416-acb2-bf5d1b973fc7,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapffab0c4b-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.152 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.153 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.153 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.157 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.158 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffab0c4b-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.159 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapffab0c4b-81, col_values=(('external_ids', {'iface-id': 'ffab0c4b-81ca-4416-acb2-bf5d1b973fc7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:de:bd', 'vm-uuid': '125c0574-9fcf-4ecf-9bd8-c4008826d3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:10:41 compute-0 NetworkManager[56353]: <info>  [1765462241.1627] manager: (tapffab0c4b-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.165 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.170 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.172 189444 INFO os_vif [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:de:bd,bridge_name='br-int',has_traffic_filtering=True,id=ffab0c4b-81ca-4416-acb2-bf5d1b973fc7,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapffab0c4b-81')#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.289 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.291 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.291 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.291 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No VIF found with MAC fa:16:3e:64:de:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.292 189444 INFO nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Using config drive#033[00m
Dec 11 14:10:41 compute-0 rsyslogd[236802]: message too long (8192) with configured size 8096, begin of message is: 2025-12-11 14:10:41.101 189444 DEBUG nova.virt.libvirt.vif [None req-8839ad25-1f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.581 189444 INFO nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Creating config drive at /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.config#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.590 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdofho14h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:41 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:41.640 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.716 189444 DEBUG oslo_concurrency.processutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdofho14h" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:41 compute-0 kernel: tapffab0c4b-81: entered promiscuous mode
Dec 11 14:10:41 compute-0 NetworkManager[56353]: <info>  [1765462241.8038] manager: (tapffab0c4b-81): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec 11 14:10:41 compute-0 ovn_controller[97832]: 2025-12-11T14:10:41Z|00045|binding|INFO|Claiming lport ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 for this chassis.
Dec 11 14:10:41 compute-0 ovn_controller[97832]: 2025-12-11T14:10:41Z|00046|binding|INFO|ffab0c4b-81ca-4416-acb2-bf5d1b973fc7: Claiming fa:16:3e:64:de:bd 192.168.0.232
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.808 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.843 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:41 compute-0 systemd-udevd[243630]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:10:41 compute-0 ovn_controller[97832]: 2025-12-11T14:10:41Z|00047|binding|INFO|Setting lport ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 ovn-installed in OVS
Dec 11 14:10:41 compute-0 nova_compute[189440]: 2025-12-11 14:10:41.848 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:41 compute-0 NetworkManager[56353]: <info>  [1765462241.8625] device (tapffab0c4b-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 14:10:41 compute-0 NetworkManager[56353]: <info>  [1765462241.8680] device (tapffab0c4b-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 11 14:10:41 compute-0 systemd-machined[155778]: New machine qemu-4-instance-00000004.
Dec 11 14:10:41 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.043 189444 DEBUG nova.network.neutron [req-577cb289-f5b7-4083-bf48-66c8bcd6ace0 req-364218a9-70be-4bda-83cc-e24e509b8819 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updated VIF entry in instance network info cache for port ffab0c4b-81ca-4416-acb2-bf5d1b973fc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.045 189444 DEBUG nova.network.neutron [req-577cb289-f5b7-4083-bf48-66c8bcd6ace0 req-364218a9-70be-4bda-83cc-e24e509b8819 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updating instance_info_cache with network_info: [{"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.058 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:de:bd 192.168.0.232'], port_security=['fa:16:3e:64:de:bd 192.168.0.232'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5m7msfabwkqt-eaftnsicx5k4-rixmquahxbge-port-zv45recekdib', 'neutron:cidrs': '192.168.0.232/24', 'neutron:device_id': '125c0574-9fcf-4ecf-9bd8-c4008826d3b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5m7msfabwkqt-eaftnsicx5k4-rixmquahxbge-port-zv45recekdib', 'neutron:project_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d7aa95c-a649-4fd4-9e5a-18c0b6217450', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d8798ec-229b-449a-9c37-334c24aa485f, chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=ffab0c4b-81ca-4416-acb2-bf5d1b973fc7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.060 106686 INFO neutron.agent.ovn.metadata.agent [-] Port ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 in datapath 62eb1d54-32e6-4ea5-8151-f2c97214c84d bound to our chassis#033[00m
Dec 11 14:10:42 compute-0 ovn_controller[97832]: 2025-12-11T14:10:42Z|00048|binding|INFO|Setting lport ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 up in Southbound
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.063 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62eb1d54-32e6-4ea5-8151-f2c97214c84d#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.080 189444 DEBUG oslo_concurrency.lockutils [req-577cb289-f5b7-4083-bf48-66c8bcd6ace0 req-364218a9-70be-4bda-83cc-e24e509b8819 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.085 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[319205ca-7487-458b-9663-452b35ce09d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.119 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[cf77f8d7-e007-44a4-a677-36c2c0eba71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.122 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[c8036575-2055-4730-8dec-d04eb7fb819b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.149 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7fe11b-6cb5-47e5-b399-2046af405530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.169 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdc0672-f408-4c6e-8511-19d1961f51c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62eb1d54-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:cc:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378116, 'reachable_time': 24776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243645, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.187 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7254e747-9cd7-4cff-8170-9b75cb9759f7]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378129, 'tstamp': 378129}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243646, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378131, 'tstamp': 378131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243646, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.189 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62eb1d54-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.191 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.193 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.193 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62eb1d54-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.194 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.194 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62eb1d54-30, col_values=(('external_ids', {'iface-id': 'dd9a733c-26da-4e0b-928d-1f82d21083bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:10:42 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:10:42.195 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.325 189444 DEBUG nova.compute.manager [req-f8e8501d-6b66-4148-84a9-28b591ce156b req-ea56f4db-30e1-4752-b4c9-8a8005b21723 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Received event network-vif-plugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.325 189444 DEBUG oslo_concurrency.lockutils [req-f8e8501d-6b66-4148-84a9-28b591ce156b req-ea56f4db-30e1-4752-b4c9-8a8005b21723 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.325 189444 DEBUG oslo_concurrency.lockutils [req-f8e8501d-6b66-4148-84a9-28b591ce156b req-ea56f4db-30e1-4752-b4c9-8a8005b21723 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.326 189444 DEBUG oslo_concurrency.lockutils [req-f8e8501d-6b66-4148-84a9-28b591ce156b req-ea56f4db-30e1-4752-b4c9-8a8005b21723 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.326 189444 DEBUG nova.compute.manager [req-f8e8501d-6b66-4148-84a9-28b591ce156b req-ea56f4db-30e1-4752-b4c9-8a8005b21723 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Processing event network-vif-plugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.435 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765462242.434366, 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.435 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] VM Started (Lifecycle Event)#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.436 189444 DEBUG nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.443 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.448 189444 INFO nova.virt.libvirt.driver [-] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Instance spawned successfully.#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.448 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.458 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.464 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.491 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.492 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765462242.4344664, 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.492 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] VM Paused (Lifecycle Event)#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.500 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.500 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.501 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.501 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.502 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.502 189444 DEBUG nova.virt.libvirt.driver [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.510 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.517 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765462242.4419255, 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.517 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] VM Resumed (Lifecycle Event)#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.540 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.546 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.566 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.573 189444 INFO nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Took 3.84 seconds to spawn the instance on the hypervisor.#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.574 189444 DEBUG nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.655 189444 INFO nova.compute.manager [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Took 4.52 seconds to build instance.#033[00m
Dec 11 14:10:42 compute-0 nova_compute[189440]: 2025-12-11 14:10:42.674 189444 DEBUG oslo_concurrency.lockutils [None req-8839ad25-1f9c-4c90-8692-7c45bfd2efe5 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:43 compute-0 podman[243658]: 2025-12-11 14:10:43.534104653 +0000 UTC m=+0.132021667 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 11 14:10:44 compute-0 nova_compute[189440]: 2025-12-11 14:10:44.533 189444 DEBUG nova.compute.manager [req-94a60508-aa9f-43e7-850a-6517e649c5e9 req-e5c94448-9694-4012-bc33-7472eb092baf a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Received event network-vif-plugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:10:44 compute-0 nova_compute[189440]: 2025-12-11 14:10:44.534 189444 DEBUG oslo_concurrency.lockutils [req-94a60508-aa9f-43e7-850a-6517e649c5e9 req-e5c94448-9694-4012-bc33-7472eb092baf a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:44 compute-0 nova_compute[189440]: 2025-12-11 14:10:44.534 189444 DEBUG oslo_concurrency.lockutils [req-94a60508-aa9f-43e7-850a-6517e649c5e9 req-e5c94448-9694-4012-bc33-7472eb092baf a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:44 compute-0 nova_compute[189440]: 2025-12-11 14:10:44.535 189444 DEBUG oslo_concurrency.lockutils [req-94a60508-aa9f-43e7-850a-6517e649c5e9 req-e5c94448-9694-4012-bc33-7472eb092baf a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:44 compute-0 nova_compute[189440]: 2025-12-11 14:10:44.535 189444 DEBUG nova.compute.manager [req-94a60508-aa9f-43e7-850a-6517e649c5e9 req-e5c94448-9694-4012-bc33-7472eb092baf a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] No waiting events found dispatching network-vif-plugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:10:44 compute-0 nova_compute[189440]: 2025-12-11 14:10:44.536 189444 WARNING nova.compute.manager [req-94a60508-aa9f-43e7-850a-6517e649c5e9 req-e5c94448-9694-4012-bc33-7472eb092baf a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Received unexpected event network-vif-plugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 for instance with vm_state active and task_state None.#033[00m
Dec 11 14:10:45 compute-0 nova_compute[189440]: 2025-12-11 14:10:45.465 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:46 compute-0 nova_compute[189440]: 2025-12-11 14:10:46.162 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:46 compute-0 podman[243683]: 2025-12-11 14:10:46.482284945 +0000 UTC m=+0.085670664 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Dec 11 14:10:47 compute-0 nova_compute[189440]: 2025-12-11 14:10:47.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:10:47 compute-0 nova_compute[189440]: 2025-12-11 14:10:47.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:10:48 compute-0 podman[243703]: 2025-12-11 14:10:48.507491094 +0000 UTC m=+0.094052137 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:10:50 compute-0 nova_compute[189440]: 2025-12-11 14:10:50.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:10:50 compute-0 nova_compute[189440]: 2025-12-11 14:10:50.467 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:51 compute-0 nova_compute[189440]: 2025-12-11 14:10:51.167 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:52 compute-0 nova_compute[189440]: 2025-12-11 14:10:52.233 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:10:52 compute-0 nova_compute[189440]: 2025-12-11 14:10:52.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:10:53 compute-0 nova_compute[189440]: 2025-12-11 14:10:53.221 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:10:53 compute-0 nova_compute[189440]: 2025-12-11 14:10:53.222 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:10:53 compute-0 nova_compute[189440]: 2025-12-11 14:10:53.223 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:10:55 compute-0 nova_compute[189440]: 2025-12-11 14:10:55.470 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:55 compute-0 nova_compute[189440]: 2025-12-11 14:10:55.975 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updating instance_info_cache with network_info: [{"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:10:55 compute-0 nova_compute[189440]: 2025-12-11 14:10:55.995 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:10:55 compute-0 nova_compute[189440]: 2025-12-11 14:10:55.996 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:10:55 compute-0 nova_compute[189440]: 2025-12-11 14:10:55.997 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:10:55 compute-0 nova_compute[189440]: 2025-12-11 14:10:55.997 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.170 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.259 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.287 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.288 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.288 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.288 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.410 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.474 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.475 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.534 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.535 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.634 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.636 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.722 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.729 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.786 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.787 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.847 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.849 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.949 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:56 compute-0 nova_compute[189440]: 2025-12-11 14:10:56.950 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.014 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.024 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.093 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.094 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.157 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.158 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.218 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.220 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.280 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.287 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.346 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.348 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.414 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.416 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.480 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.482 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.547 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.943 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.944 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4775MB free_disk=72.32802200317383GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.945 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:10:57 compute-0 nova_compute[189440]: 2025-12-11 14:10:57.945 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.229 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.229 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.230 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.230 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.231 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.231 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.360 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.374 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.458 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:10:58 compute-0 nova_compute[189440]: 2025-12-11 14:10:58.459 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:10:59 compute-0 podman[203650]: time="2025-12-11T14:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:10:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:10:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Dec 11 14:11:00 compute-0 nova_compute[189440]: 2025-12-11 14:11:00.435 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:11:00 compute-0 nova_compute[189440]: 2025-12-11 14:11:00.435 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:11:00 compute-0 nova_compute[189440]: 2025-12-11 14:11:00.473 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:00 compute-0 podman[243775]: 2025-12-11 14:11:00.484106156 +0000 UTC m=+0.081761420 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 14:11:01 compute-0 nova_compute[189440]: 2025-12-11 14:11:01.174 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:01 compute-0 openstack_network_exporter[205834]: ERROR   14:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:11:01 compute-0 openstack_network_exporter[205834]: ERROR   14:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:11:01 compute-0 openstack_network_exporter[205834]: ERROR   14:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:11:01 compute-0 openstack_network_exporter[205834]: ERROR   14:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:11:01 compute-0 openstack_network_exporter[205834]: ERROR   14:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:11:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:11:04.085 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:11:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:11:04.086 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:11:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:11:04.087 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:11:04 compute-0 podman[243798]: 2025-12-11 14:11:04.509977468 +0000 UTC m=+0.106775785 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 14:11:05 compute-0 nova_compute[189440]: 2025-12-11 14:11:05.477 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:06 compute-0 nova_compute[189440]: 2025-12-11 14:11:06.179 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:06 compute-0 podman[243818]: 2025-12-11 14:11:06.512694721 +0000 UTC m=+0.108072966 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 11 14:11:08 compute-0 podman[243838]: 2025-12-11 14:11:08.521652686 +0000 UTC m=+0.094944689 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, version=9.4, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, release-0.7.12=, io.openshift.expose-services=)
Dec 11 14:11:08 compute-0 podman[243837]: 2025-12-11 14:11:08.540080602 +0000 UTC m=+0.111557240 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 11 14:11:10 compute-0 nova_compute[189440]: 2025-12-11 14:11:10.479 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:11 compute-0 nova_compute[189440]: 2025-12-11 14:11:11.181 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:11 compute-0 podman[243875]: 2025-12-11 14:11:11.501565177 +0000 UTC m=+0.086835743 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251210, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 14:11:12 compute-0 ovn_controller[97832]: 2025-12-11T14:11:12Z|00049|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec 11 14:11:13 compute-0 ovn_controller[97832]: 2025-12-11T14:11:13Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:de:bd 192.168.0.232
Dec 11 14:11:13 compute-0 ovn_controller[97832]: 2025-12-11T14:11:13Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:de:bd 192.168.0.232
Dec 11 14:11:14 compute-0 podman[243909]: 2025-12-11 14:11:14.569292282 +0000 UTC m=+0.163750022 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 11 14:11:15 compute-0 nova_compute[189440]: 2025-12-11 14:11:15.482 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:16 compute-0 nova_compute[189440]: 2025-12-11 14:11:16.185 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:17 compute-0 podman[243937]: 2025-12-11 14:11:17.493628949 +0000 UTC m=+0.074397541 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Dec 11 14:11:19 compute-0 podman[243959]: 2025-12-11 14:11:19.512248838 +0000 UTC m=+0.097153703 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:11:20 compute-0 nova_compute[189440]: 2025-12-11 14:11:20.486 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:21 compute-0 nova_compute[189440]: 2025-12-11 14:11:21.189 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:25 compute-0 nova_compute[189440]: 2025-12-11 14:11:25.492 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:26 compute-0 nova_compute[189440]: 2025-12-11 14:11:26.193 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:29 compute-0 podman[203650]: time="2025-12-11T14:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:11:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:11:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec 11 14:11:30 compute-0 nova_compute[189440]: 2025-12-11 14:11:30.494 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:31 compute-0 nova_compute[189440]: 2025-12-11 14:11:31.197 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:31 compute-0 openstack_network_exporter[205834]: ERROR   14:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:11:31 compute-0 openstack_network_exporter[205834]: ERROR   14:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:11:31 compute-0 openstack_network_exporter[205834]: ERROR   14:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:11:31 compute-0 openstack_network_exporter[205834]: ERROR   14:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:11:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:11:31 compute-0 openstack_network_exporter[205834]: ERROR   14:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:11:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:11:31 compute-0 podman[243982]: 2025-12-11 14:11:31.529704739 +0000 UTC m=+0.115267701 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:11:35 compute-0 podman[244005]: 2025-12-11 14:11:35.482938113 +0000 UTC m=+0.084610218 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 11 14:11:35 compute-0 nova_compute[189440]: 2025-12-11 14:11:35.496 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:36 compute-0 nova_compute[189440]: 2025-12-11 14:11:36.201 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:37 compute-0 podman[244024]: 2025-12-11 14:11:37.493832904 +0000 UTC m=+0.088896261 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:11:39 compute-0 podman[244044]: 2025-12-11 14:11:39.552451894 +0000 UTC m=+0.131282941 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 11 14:11:39 compute-0 podman[244045]: 2025-12-11 14:11:39.564807717 +0000 UTC m=+0.134567904 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public)
Dec 11 14:11:40 compute-0 nova_compute[189440]: 2025-12-11 14:11:40.498 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:41 compute-0 nova_compute[189440]: 2025-12-11 14:11:41.204 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:42 compute-0 podman[244083]: 2025-12-11 14:11:42.517757875 +0000 UTC m=+0.106385057 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2)
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.985 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.986 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.000 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '081c0041-e68f-4fa9-8c7b-7139d21acf6b', 'name': 'vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.003 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.004 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/125c0574-9fcf-4ecf-9bd8-c4008826d3b3 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}cccfdb98f7814d2104ef30522629f30f2e7025f3d377e4b2e1b0c401a523009e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.714 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Thu, 11 Dec 2025 14:11:43 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ad6c96a3-6d4d-4118-9d9a-2ba786520244 x-openstack-request-id: req-ad6c96a3-6d4d-4118-9d9a-2ba786520244 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.714 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "125c0574-9fcf-4ecf-9bd8-c4008826d3b3", "name": "vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr", "status": "ACTIVE", "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "user_id": "26c7a9a5c1c0404bb144cd3cba8ecf9f", "metadata": {"metering.server_group": "f7b42205-1b4f-49eb-9f02-9c04957c72b4"}, "hostId": "8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3", "image": {"id": "714a3758-ec97-4149-8cfb-208787ab3704", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/714a3758-ec97-4149-8cfb-208787ab3704"}]}, "flavor": {"id": "1d6c0fe6-4c75-4860-b5c4-bc55bee577e2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/1d6c0fe6-4c75-4860-b5c4-bc55bee577e2"}]}, "created": "2025-12-11T14:10:35Z", "updated": "2025-12-11T14:10:42Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.232", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:64:de:bd"}, {"version": 4, "addr": "192.168.122.210", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:64:de:bd"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/125c0574-9fcf-4ecf-9bd8-c4008826d3b3"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/125c0574-9fcf-4ecf-9bd8-c4008826d3b3"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-11T14:10:42.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.714 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/125c0574-9fcf-4ecf-9bd8-c4008826d3b3 used request id req-ad6c96a3-6d4d-4118-9d9a-2ba786520244 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.715 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '125c0574-9fcf-4ecf-9bd8-c4008826d3b3', 'name': 'vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.719 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '82437023-b24d-48bf-af1c-d1957df4da67', 'name': 'test_0', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.722 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2', 'name': 'vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.723 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.723 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:11:43.723331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.728 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.732 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 / tapffab0c4b-81 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.733 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.737 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.741 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes volume: 7172 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.742 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.742 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.743 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:11:43.743100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.774 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/cpu volume: 34930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.803 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/cpu volume: 31020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.827 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/cpu volume: 42270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.853 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/cpu volume: 383790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.854 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.854 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:11:43.854813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.879 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.880 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.880 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.905 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.905 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.906 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.932 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.932 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.933 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.958 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.959 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.959 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.961 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.961 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.962 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.bytes.delta volume: 210 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.962 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:11:43.961450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.962 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.963 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes.delta volume: 2540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.963 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.963 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:11:43.964168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.964 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.965 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/memory.usage volume: 49.6015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.965 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.965 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/memory.usage volume: 48.96484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.966 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.967 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.967 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.972 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:11:43.969908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.971 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.973 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.976 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.979 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.982 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.984 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.984 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.985 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-11T14:11:43.985035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.985 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.985 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr>]
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.986 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.986 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:11:43.986612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.987 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.987 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.987 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.988 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets volume: 60 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.988 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.989 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:11:43.989210) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.989 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.990 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.990 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.990 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.991 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.991 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:11:43.991831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.992 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.992 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.992 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.993 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.993 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.994 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:11:43.994288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.994 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.995 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.995 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.995 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.995 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.996 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.996 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.996 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.997 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.997 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.997 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.997 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.999 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:43.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:11:43.999172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.078 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 500931517 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.078 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 79030432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.079 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 61428410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.145 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 406025219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.146 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 74406979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.146 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 55584693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.212 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 414087761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.212 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 86850533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.213 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 54519228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.297 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 386530042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.298 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 87643374 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.299 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 69768051 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.301 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.302 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:11:44.301884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.303 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.304 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.304 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.305 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.305 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.306 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.306 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.307 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.307 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.308 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.308 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.310 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:11:44.310700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.311 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.312 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.312 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.313 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.313 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.314 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.314 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.315 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.315 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.316 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.316 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.316 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.318 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:11:44.319392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.319 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.320 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.321 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.321 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.322 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 41705472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.322 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.323 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.323 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.324 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.325 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.325 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 41844736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.326 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.326 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.328 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.328 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.328 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 1759291958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.329 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 10306999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.329 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:11:44.328417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.330 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 1471200153 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.330 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 9758476 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.330 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.331 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 1535528083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.331 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 13914030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.331 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.331 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 7712445792 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.332 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 207693799 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.332 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.333 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.334 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.334 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:11:44.333950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.334 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.335 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.335 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.336 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.336 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.336 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:11:44.336360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.337 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.337 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.337 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.338 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.338 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.338 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.338 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.339 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.339 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.339 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.339 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.341 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.341 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.341 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.341 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.342 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.342 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:11:44.341287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.343 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.343 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-11T14:11:44.343957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.344 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.344 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr>]
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.345 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:11:44.345615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.347 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.348 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.348 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.348 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.349 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:11:44.348110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.350 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.351 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.351 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.351 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.352 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.353 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.353 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.354 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.354 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.355 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.355 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:11:44.351635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:11:44.353964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.356 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.357 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.357 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.357 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:11:44.357024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.358 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.359 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.359 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.359 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:11:44.359579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.360 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.360 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.360 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.360 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.361 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.361 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.361 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.362 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.362 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.362 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.362 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:11:44.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:11:44 compute-0 podman[244104]: 2025-12-11 14:11:44.863140652 +0000 UTC m=+0.153647765 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 14:11:45 compute-0 nova_compute[189440]: 2025-12-11 14:11:45.502 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:46 compute-0 nova_compute[189440]: 2025-12-11 14:11:46.209 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:48 compute-0 nova_compute[189440]: 2025-12-11 14:11:48.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:11:48 compute-0 nova_compute[189440]: 2025-12-11 14:11:48.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:11:48 compute-0 podman[244130]: 2025-12-11 14:11:48.545865843 +0000 UTC m=+0.126817077 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 11 14:11:50 compute-0 nova_compute[189440]: 2025-12-11 14:11:50.505 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:50 compute-0 podman[244151]: 2025-12-11 14:11:50.519445013 +0000 UTC m=+0.112893116 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:11:51 compute-0 nova_compute[189440]: 2025-12-11 14:11:51.211 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:52 compute-0 nova_compute[189440]: 2025-12-11 14:11:52.237 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:11:53 compute-0 nova_compute[189440]: 2025-12-11 14:11:53.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:11:54 compute-0 nova_compute[189440]: 2025-12-11 14:11:54.233 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:11:54 compute-0 nova_compute[189440]: 2025-12-11 14:11:54.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:11:55 compute-0 nova_compute[189440]: 2025-12-11 14:11:55.289 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:11:55 compute-0 nova_compute[189440]: 2025-12-11 14:11:55.290 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:11:55 compute-0 nova_compute[189440]: 2025-12-11 14:11:55.291 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:11:55 compute-0 nova_compute[189440]: 2025-12-11 14:11:55.510 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:56 compute-0 nova_compute[189440]: 2025-12-11 14:11:56.215 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.317 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Updating instance_info_cache with network_info: [{"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.556 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.557 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.559 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.559 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.601 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.602 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.603 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.604 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.738 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.824 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.825 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.908 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.909 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.988 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:57 compute-0 nova_compute[189440]: 2025-12-11 14:11:57.989 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.067 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.076 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.147 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.149 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.246 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.250 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.349 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.351 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.415 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.431 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.511 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.513 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.595 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.596 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.657 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.659 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.750 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.762 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.833 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.835 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.901 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:58 compute-0 nova_compute[189440]: 2025-12-11 14:11:58.904 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.001 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.004 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.101 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.582 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.584 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4657MB free_disk=72.30702590942383GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.584 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.584 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.675 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.676 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.676 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.677 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.677 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.677 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:11:59 compute-0 podman[203650]: time="2025-12-11T14:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:11:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:11:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec 11 14:11:59 compute-0 nova_compute[189440]: 2025-12-11 14:11:59.803 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:12:00 compute-0 nova_compute[189440]: 2025-12-11 14:12:00.258 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:12:00 compute-0 nova_compute[189440]: 2025-12-11 14:12:00.262 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:12:00 compute-0 nova_compute[189440]: 2025-12-11 14:12:00.263 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:12:00 compute-0 nova_compute[189440]: 2025-12-11 14:12:00.512 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:01 compute-0 nova_compute[189440]: 2025-12-11 14:12:01.218 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:01 compute-0 openstack_network_exporter[205834]: ERROR   14:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:12:01 compute-0 openstack_network_exporter[205834]: ERROR   14:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:12:01 compute-0 openstack_network_exporter[205834]: ERROR   14:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:12:01 compute-0 openstack_network_exporter[205834]: ERROR   14:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:12:01 compute-0 openstack_network_exporter[205834]: ERROR   14:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:12:01 compute-0 nova_compute[189440]: 2025-12-11 14:12:01.940 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:01 compute-0 nova_compute[189440]: 2025-12-11 14:12:01.941 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:01 compute-0 nova_compute[189440]: 2025-12-11 14:12:01.941 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:02 compute-0 podman[244224]: 2025-12-11 14:12:02.483271661 +0000 UTC m=+0.076042074 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:12:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:12:04.086 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:12:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:12:04.086 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:12:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:12:04.087 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:12:05 compute-0 nova_compute[189440]: 2025-12-11 14:12:05.515 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:06 compute-0 nova_compute[189440]: 2025-12-11 14:12:06.221 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:06 compute-0 podman[244248]: 2025-12-11 14:12:06.502904785 +0000 UTC m=+0.084838478 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 11 14:12:08 compute-0 podman[244265]: 2025-12-11 14:12:08.481023408 +0000 UTC m=+0.079127730 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 11 14:12:10 compute-0 podman[244284]: 2025-12-11 14:12:10.47514202 +0000 UTC m=+0.076382751 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:12:10 compute-0 podman[244285]: 2025-12-11 14:12:10.487445612 +0000 UTC m=+0.087851283 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=edpm, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Dec 11 14:12:10 compute-0 nova_compute[189440]: 2025-12-11 14:12:10.518 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:11 compute-0 nova_compute[189440]: 2025-12-11 14:12:11.224 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:13 compute-0 podman[244319]: 2025-12-11 14:12:13.579238112 +0000 UTC m=+0.151828679 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 11 14:12:15 compute-0 nova_compute[189440]: 2025-12-11 14:12:15.520 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:15 compute-0 podman[244340]: 2025-12-11 14:12:15.589110412 +0000 UTC m=+0.177982880 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 11 14:12:16 compute-0 nova_compute[189440]: 2025-12-11 14:12:16.229 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:19 compute-0 podman[244365]: 2025-12-11 14:12:19.528128922 +0000 UTC m=+0.116834412 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350)
Dec 11 14:12:20 compute-0 nova_compute[189440]: 2025-12-11 14:12:20.523 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:21 compute-0 nova_compute[189440]: 2025-12-11 14:12:21.233 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:21 compute-0 podman[244386]: 2025-12-11 14:12:21.494001973 +0000 UTC m=+0.093032529 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:12:25 compute-0 nova_compute[189440]: 2025-12-11 14:12:25.526 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:26 compute-0 nova_compute[189440]: 2025-12-11 14:12:26.236 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:29 compute-0 podman[203650]: time="2025-12-11T14:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:12:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:12:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec 11 14:12:30 compute-0 nova_compute[189440]: 2025-12-11 14:12:30.528 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:31 compute-0 nova_compute[189440]: 2025-12-11 14:12:31.240 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:31 compute-0 openstack_network_exporter[205834]: ERROR   14:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:12:31 compute-0 openstack_network_exporter[205834]: ERROR   14:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:12:31 compute-0 openstack_network_exporter[205834]: ERROR   14:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:12:31 compute-0 openstack_network_exporter[205834]: ERROR   14:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:12:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:12:31 compute-0 openstack_network_exporter[205834]: ERROR   14:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:12:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:12:33 compute-0 podman[244411]: 2025-12-11 14:12:33.525399883 +0000 UTC m=+0.117340206 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 14:12:35 compute-0 nova_compute[189440]: 2025-12-11 14:12:35.534 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:36 compute-0 nova_compute[189440]: 2025-12-11 14:12:36.244 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:37 compute-0 podman[244434]: 2025-12-11 14:12:37.53491903 +0000 UTC m=+0.127799172 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 11 14:12:39 compute-0 podman[244453]: 2025-12-11 14:12:39.536133626 +0000 UTC m=+0.117195001 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 11 14:12:40 compute-0 nova_compute[189440]: 2025-12-11 14:12:40.538 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:41 compute-0 nova_compute[189440]: 2025-12-11 14:12:41.247 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:41 compute-0 podman[244473]: 2025-12-11 14:12:41.479915655 +0000 UTC m=+0.075333116 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:12:41 compute-0 podman[244474]: 2025-12-11 14:12:41.514732968 +0000 UTC m=+0.106882838 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 11 14:12:43 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 11 14:12:43 compute-0 podman[244514]: 2025-12-11 14:12:43.907168189 +0000 UTC m=+0.123636120 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 14:12:45 compute-0 nova_compute[189440]: 2025-12-11 14:12:45.542 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:46 compute-0 nova_compute[189440]: 2025-12-11 14:12:46.251 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:46 compute-0 podman[244534]: 2025-12-11 14:12:46.536328285 +0000 UTC m=+0.127128315 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 14:12:50 compute-0 nova_compute[189440]: 2025-12-11 14:12:50.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:50 compute-0 nova_compute[189440]: 2025-12-11 14:12:50.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:12:50 compute-0 nova_compute[189440]: 2025-12-11 14:12:50.546 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:50 compute-0 podman[244560]: 2025-12-11 14:12:50.549149226 +0000 UTC m=+0.132199320 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.254 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.260 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.407 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.448 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Triggering sync for uuid 82437023-b24d-48bf-af1c-d1957df4da67 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.449 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Triggering sync for uuid 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.450 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Triggering sync for uuid 081c0041-e68f-4fa9-8c7b-7139d21acf6b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.451 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Triggering sync for uuid 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.452 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.453 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "82437023-b24d-48bf-af1c-d1957df4da67" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.454 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.457 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.458 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.459 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.461 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.462 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.542 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "82437023-b24d-48bf-af1c-d1957df4da67" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.556 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.575 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:12:51 compute-0 nova_compute[189440]: 2025-12-11 14:12:51.576 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:12:52 compute-0 podman[244581]: 2025-12-11 14:12:52.600920721 +0000 UTC m=+0.174885845 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:12:54 compute-0 nova_compute[189440]: 2025-12-11 14:12:54.290 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:54 compute-0 nova_compute[189440]: 2025-12-11 14:12:54.290 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:12:54 compute-0 nova_compute[189440]: 2025-12-11 14:12:54.291 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:12:55 compute-0 nova_compute[189440]: 2025-12-11 14:12:55.310 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:12:55 compute-0 nova_compute[189440]: 2025-12-11 14:12:55.311 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:12:55 compute-0 nova_compute[189440]: 2025-12-11 14:12:55.311 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:12:55 compute-0 nova_compute[189440]: 2025-12-11 14:12:55.312 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:12:55 compute-0 nova_compute[189440]: 2025-12-11 14:12:55.549 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:56 compute-0 nova_compute[189440]: 2025-12-11 14:12:56.258 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:12:57 compute-0 nova_compute[189440]: 2025-12-11 14:12:57.048 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:12:57 compute-0 nova_compute[189440]: 2025-12-11 14:12:57.064 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:12:57 compute-0 nova_compute[189440]: 2025-12-11 14:12:57.065 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:12:57 compute-0 nova_compute[189440]: 2025-12-11 14:12:57.066 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:57 compute-0 nova_compute[189440]: 2025-12-11 14:12:57.066 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:57 compute-0 nova_compute[189440]: 2025-12-11 14:12:57.066 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.269 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.270 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.270 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.271 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.360 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.442 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.444 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.542 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.544 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.635 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.637 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.729 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.737 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.808 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.811 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.898 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.900 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.971 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:58 compute-0 nova_compute[189440]: 2025-12-11 14:12:58.973 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.075 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.088 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.174 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.176 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.233 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.235 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.328 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.329 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.384 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.393 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.457 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.458 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.519 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.521 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.601 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.602 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:12:59 compute-0 nova_compute[189440]: 2025-12-11 14:12:59.661 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:12:59 compute-0 podman[203650]: time="2025-12-11T14:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:12:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:12:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.188 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.189 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4656MB free_disk=72.30702209472656GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.190 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.190 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.282 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.282 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.282 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.282 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.283 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.283 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.307 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing inventories for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.324 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating ProviderTree inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.324 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.338 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing aggregate associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.367 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing trait associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.458 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.553 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.692 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.695 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:13:00 compute-0 nova_compute[189440]: 2025-12-11 14:13:00.695 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:13:01 compute-0 nova_compute[189440]: 2025-12-11 14:13:01.262 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:01 compute-0 openstack_network_exporter[205834]: ERROR   14:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:13:01 compute-0 openstack_network_exporter[205834]: ERROR   14:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:13:01 compute-0 openstack_network_exporter[205834]: ERROR   14:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:13:01 compute-0 openstack_network_exporter[205834]: ERROR   14:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:13:01 compute-0 openstack_network_exporter[205834]: ERROR   14:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:13:02 compute-0 nova_compute[189440]: 2025-12-11 14:13:02.691 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:02 compute-0 nova_compute[189440]: 2025-12-11 14:13:02.717 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:02 compute-0 nova_compute[189440]: 2025-12-11 14:13:02.717 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:13:04.088 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:13:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:13:04.089 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:13:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:13:04.090 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:13:04 compute-0 podman[244652]: 2025-12-11 14:13:04.51905186 +0000 UTC m=+0.104282115 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:13:05 compute-0 nova_compute[189440]: 2025-12-11 14:13:05.555 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:06 compute-0 nova_compute[189440]: 2025-12-11 14:13:06.266 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:07 compute-0 nova_compute[189440]: 2025-12-11 14:13:07.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:07 compute-0 nova_compute[189440]: 2025-12-11 14:13:07.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 11 14:13:07 compute-0 nova_compute[189440]: 2025-12-11 14:13:07.252 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:08 compute-0 podman[244677]: 2025-12-11 14:13:08.541518794 +0000 UTC m=+0.132923816 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 11 14:13:10 compute-0 podman[244696]: 2025-12-11 14:13:10.531647449 +0000 UTC m=+0.109592035 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 11 14:13:10 compute-0 nova_compute[189440]: 2025-12-11 14:13:10.559 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:11 compute-0 nova_compute[189440]: 2025-12-11 14:13:11.268 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:12 compute-0 podman[244716]: 2025-12-11 14:13:12.520298249 +0000 UTC m=+0.097193193 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, maintainer=Red Hat, Inc., distribution-scope=public, config_id=edpm, release-0.7.12=, io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Dec 11 14:13:12 compute-0 podman[244715]: 2025-12-11 14:13:12.53544097 +0000 UTC m=+0.115551492 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:13:14 compute-0 podman[244754]: 2025-12-11 14:13:14.570428453 +0000 UTC m=+0.158421641 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 11 14:13:15 compute-0 nova_compute[189440]: 2025-12-11 14:13:15.562 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:16 compute-0 nova_compute[189440]: 2025-12-11 14:13:16.272 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:17 compute-0 podman[244773]: 2025-12-11 14:13:17.531236503 +0000 UTC m=+0.130672182 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:13:20 compute-0 nova_compute[189440]: 2025-12-11 14:13:20.565 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:21 compute-0 nova_compute[189440]: 2025-12-11 14:13:21.276 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:21 compute-0 podman[244799]: 2025-12-11 14:13:21.549474213 +0000 UTC m=+0.126070059 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 11 14:13:23 compute-0 podman[244819]: 2025-12-11 14:13:23.483533525 +0000 UTC m=+0.080811040 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:13:25 compute-0 nova_compute[189440]: 2025-12-11 14:13:25.567 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:26 compute-0 nova_compute[189440]: 2025-12-11 14:13:26.280 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:29 compute-0 podman[203650]: time="2025-12-11T14:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:13:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:13:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec 11 14:13:30 compute-0 nova_compute[189440]: 2025-12-11 14:13:30.571 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:31 compute-0 nova_compute[189440]: 2025-12-11 14:13:31.284 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:31 compute-0 openstack_network_exporter[205834]: ERROR   14:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:13:31 compute-0 openstack_network_exporter[205834]: ERROR   14:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:13:31 compute-0 openstack_network_exporter[205834]: ERROR   14:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:13:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:13:31 compute-0 openstack_network_exporter[205834]: ERROR   14:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:13:31 compute-0 openstack_network_exporter[205834]: ERROR   14:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:13:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:13:35 compute-0 podman[244845]: 2025-12-11 14:13:35.496849491 +0000 UTC m=+0.090866437 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:13:35 compute-0 nova_compute[189440]: 2025-12-11 14:13:35.572 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:36 compute-0 nova_compute[189440]: 2025-12-11 14:13:36.287 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:39 compute-0 podman[244867]: 2025-12-11 14:13:39.513166155 +0000 UTC m=+0.097608661 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 11 14:13:40 compute-0 nova_compute[189440]: 2025-12-11 14:13:40.575 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:41 compute-0 nova_compute[189440]: 2025-12-11 14:13:41.290 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:41 compute-0 podman[244885]: 2025-12-11 14:13:41.53426528 +0000 UTC m=+0.109735740 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.984 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.985 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.996 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '081c0041-e68f-4fa9-8c7b-7139d21acf6b', 'name': 'vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.004 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '125c0574-9fcf-4ecf-9bd8-c4008826d3b3', 'name': 'vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.009 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '82437023-b24d-48bf-af1c-d1957df4da67', 'name': 'test_0', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.014 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2', 'name': 'vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.015 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.015 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:13:43.015580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.022 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.028 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.036 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.043 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes volume: 7242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.044 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.045 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:13:43.045311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.085 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/cpu volume: 36650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.122 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/cpu volume: 32750000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.166 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/cpu volume: 44040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.207 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/cpu volume: 385500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.208 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.209 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:13:43.209208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.252 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.253 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.253 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.307 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.308 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.309 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.353 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.354 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.354 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.401 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.402 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.403 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.405 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.405 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.406 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.407 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:13:43.405480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.407 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.409 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.409 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.409 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.410 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.410 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.411 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/memory.usage volume: 48.96484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:13:43.409476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.412 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.413 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:13:43.413352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.414 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.414 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.415 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.417 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.417 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.417 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.418 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.418 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.419 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:13:43.418173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.420 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.420 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets volume: 61 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.421 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.422 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.422 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.422 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.423 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.423 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.424 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.424 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:13:43.422351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.425 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.426 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.426 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.427 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.427 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.428 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.429 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.429 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.429 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.430 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:13:43.426166) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.430 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.430 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.431 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.431 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.431 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.432 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.432 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.432 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.432 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.433 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.433 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.433 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:13:43.430217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:13:43.435234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.537 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 500931517 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.537 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 79030432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.544 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 61428410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 podman[244905]: 2025-12-11 14:13:43.547661744 +0000 UTC m=+0.128804216 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, version=9.4, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm)
Dec 11 14:13:43 compute-0 podman[244904]: 2025-12-11 14:13:43.550191236 +0000 UTC m=+0.133127942 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.621 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 406025219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.621 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 74406979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.622 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 55584693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.711 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 414087761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.711 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 86850533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.712 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 54519228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.802 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 386530042 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.803 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 87643374 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.804 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.latency volume: 69768051 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.805 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.805 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.805 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.806 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.806 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.806 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.807 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.807 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.808 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.809 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.809 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.809 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.810 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.810 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:13:43.806103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.811 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.812 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.813 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.813 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.813 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.813 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.814 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.814 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.814 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.815 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.815 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:13:43.814096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.816 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.817 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.817 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.817 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.818 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.818 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.819 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.819 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.820 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.821 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.821 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.821 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.821 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.821 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.822 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.822 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:13:43.821426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.823 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 41791488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.824 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.824 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.824 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.825 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.825 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.825 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 41844736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.826 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.826 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.828 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.829 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.829 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:13:43.829089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.829 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 1759291958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.830 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 10306999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.830 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.830 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 1481953607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.831 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 9758476 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.831 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.832 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 1535528083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.832 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 13914030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.832 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.833 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 7712445792 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.833 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 207693799 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.834 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.835 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.836 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.836 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.836 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.837 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.837 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.838 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:13:43.836126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.839 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.839 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.839 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:13:43.839419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.840 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.840 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.840 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.840 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.841 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.841 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.841 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.841 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.842 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.842 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.842 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.843 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.843 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.843 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.844 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:13:43.843869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.844 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.844 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.845 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.845 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.846 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.846 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.846 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.846 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.847 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.847 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:13:43.846329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.847 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.848 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.848 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.848 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:13:43.847985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.849 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.849 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.850 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:13:43.850544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.851 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.852 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.852 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:13:43.852196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.852 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.853 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.853 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.854 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.854 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.854 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.854 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:13:43.854379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.855 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.855 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.856 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.856 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.856 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.856 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.856 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.857 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:13:43.856585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.857 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.857 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.858 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.858 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.858 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.858 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.859 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.859 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.859 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.859 14 DEBUG ceilometer.compute.pollsters [-] 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:13:43.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:13:44 compute-0 podman[244940]: 2025-12-11 14:13:44.798942273 +0000 UTC m=+0.097519260 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2)
Dec 11 14:13:45 compute-0 nova_compute[189440]: 2025-12-11 14:13:45.578 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:46 compute-0 nova_compute[189440]: 2025-12-11 14:13:46.292 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:48 compute-0 podman[244960]: 2025-12-11 14:13:48.601958542 +0000 UTC m=+0.184377148 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 11 14:13:50 compute-0 nova_compute[189440]: 2025-12-11 14:13:50.267 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:50 compute-0 nova_compute[189440]: 2025-12-11 14:13:50.267 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:13:50 compute-0 nova_compute[189440]: 2025-12-11 14:13:50.581 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:51 compute-0 nova_compute[189440]: 2025-12-11 14:13:51.295 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:52 compute-0 podman[244986]: 2025-12-11 14:13:52.509225363 +0000 UTC m=+0.099500918 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 11 14:13:54 compute-0 podman[245006]: 2025-12-11 14:13:54.522531246 +0000 UTC m=+0.114802973 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:13:55 compute-0 nova_compute[189440]: 2025-12-11 14:13:55.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:55 compute-0 nova_compute[189440]: 2025-12-11 14:13:55.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:13:55 compute-0 nova_compute[189440]: 2025-12-11 14:13:55.586 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:55 compute-0 nova_compute[189440]: 2025-12-11 14:13:55.921 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:13:55 compute-0 nova_compute[189440]: 2025-12-11 14:13:55.922 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:13:55 compute-0 nova_compute[189440]: 2025-12-11 14:13:55.922 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:13:56 compute-0 nova_compute[189440]: 2025-12-11 14:13:56.298 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:13:57 compute-0 nova_compute[189440]: 2025-12-11 14:13:57.545 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updating instance_info_cache with network_info: [{"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:13:57 compute-0 nova_compute[189440]: 2025-12-11 14:13:57.559 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:13:57 compute-0 nova_compute[189440]: 2025-12-11 14:13:57.559 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:13:57 compute-0 nova_compute[189440]: 2025-12-11 14:13:57.560 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:57 compute-0 nova_compute[189440]: 2025-12-11 14:13:57.561 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:57 compute-0 nova_compute[189440]: 2025-12-11 14:13:57.561 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:13:59 compute-0 podman[203650]: time="2025-12-11T14:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:13:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:13:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.337 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.337 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.338 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.339 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.511 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.588 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.606 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.607 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.699 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.702 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.806 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.808 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.905 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:00 compute-0 nova_compute[189440]: 2025-12-11 14:14:00.916 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.010 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.011 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.107 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.110 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.203 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.204 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.302 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.306 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.319 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 openstack_network_exporter[205834]: ERROR   14:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:14:01 compute-0 openstack_network_exporter[205834]: ERROR   14:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:14:01 compute-0 openstack_network_exporter[205834]: ERROR   14:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:14:01 compute-0 openstack_network_exporter[205834]: ERROR   14:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:14:01 compute-0 openstack_network_exporter[205834]: ERROR   14:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.424 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.428 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.510 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.512 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.616 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.617 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.701 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.713 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.809 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.810 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.896 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.898 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.984 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:01 compute-0 nova_compute[189440]: 2025-12-11 14:14:01.985 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.050 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.507 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.508 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4626MB free_disk=72.30709838867188GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.509 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.509 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.774 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.775 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.775 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.775 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.775 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:14:02 compute-0 nova_compute[189440]: 2025-12-11 14:14:02.775 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:14:03 compute-0 nova_compute[189440]: 2025-12-11 14:14:03.008 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:14:03 compute-0 nova_compute[189440]: 2025-12-11 14:14:03.103 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:14:03 compute-0 nova_compute[189440]: 2025-12-11 14:14:03.105 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:14:03 compute-0 nova_compute[189440]: 2025-12-11 14:14:03.106 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:14:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:04.090 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:14:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:04.090 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:14:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:04.091 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:14:04 compute-0 nova_compute[189440]: 2025-12-11 14:14:04.105 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:14:04 compute-0 nova_compute[189440]: 2025-12-11 14:14:04.106 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:14:05 compute-0 nova_compute[189440]: 2025-12-11 14:14:05.592 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:06 compute-0 nova_compute[189440]: 2025-12-11 14:14:06.307 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:06 compute-0 podman[245078]: 2025-12-11 14:14:06.490765221 +0000 UTC m=+0.134347812 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:14:09 compute-0 podman[245103]: 2025-12-11 14:14:09.891961086 +0000 UTC m=+0.166514320 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec 11 14:14:10 compute-0 nova_compute[189440]: 2025-12-11 14:14:10.594 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:11 compute-0 nova_compute[189440]: 2025-12-11 14:14:11.309 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:12 compute-0 podman[245121]: 2025-12-11 14:14:12.508132675 +0000 UTC m=+0.092087197 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm)
Dec 11 14:14:14 compute-0 podman[245139]: 2025-12-11 14:14:14.526254547 +0000 UTC m=+0.106032668 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 11 14:14:14 compute-0 podman[245140]: 2025-12-11 14:14:14.567118338 +0000 UTC m=+0.139677062 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=ubi9, distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, maintainer=Red Hat, Inc.)
Dec 11 14:14:15 compute-0 podman[245177]: 2025-12-11 14:14:15.528450426 +0000 UTC m=+0.126296364 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true)
Dec 11 14:14:15 compute-0 nova_compute[189440]: 2025-12-11 14:14:15.600 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:16 compute-0 nova_compute[189440]: 2025-12-11 14:14:16.310 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:19 compute-0 podman[245197]: 2025-12-11 14:14:19.575584726 +0000 UTC m=+0.171842730 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 11 14:14:20 compute-0 nova_compute[189440]: 2025-12-11 14:14:20.604 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:21 compute-0 nova_compute[189440]: 2025-12-11 14:14:21.314 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:23 compute-0 podman[245223]: 2025-12-11 14:14:23.501192398 +0000 UTC m=+0.086454689 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, io.k8s.display-name=Red Hat 
Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Dec 11 14:14:25 compute-0 podman[245241]: 2025-12-11 14:14:25.483364419 +0000 UTC m=+0.078247418 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:14:25 compute-0 nova_compute[189440]: 2025-12-11 14:14:25.606 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.022 189444 DEBUG oslo_concurrency.lockutils [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.023 189444 DEBUG oslo_concurrency.lockutils [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.024 189444 DEBUG oslo_concurrency.lockutils [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.024 189444 DEBUG oslo_concurrency.lockutils [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.025 189444 DEBUG oslo_concurrency.lockutils [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.027 189444 INFO nova.compute.manager [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Terminating instance#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.028 189444 DEBUG nova.compute.manager [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 11 14:14:26 compute-0 kernel: tapf5b2dabe-ea (unregistering): left promiscuous mode
Dec 11 14:14:26 compute-0 NetworkManager[56353]: <info>  [1765462466.0856] device (tapf5b2dabe-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 11 14:14:26 compute-0 ovn_controller[97832]: 2025-12-11T14:14:26Z|00050|binding|INFO|Releasing lport f5b2dabe-ea06-4461-8450-3d306c4cd300 from this chassis (sb_readonly=0)
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.104 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 ovn_controller[97832]: 2025-12-11T14:14:26Z|00051|binding|INFO|Setting lport f5b2dabe-ea06-4461-8450-3d306c4cd300 down in Southbound
Dec 11 14:14:26 compute-0 ovn_controller[97832]: 2025-12-11T14:14:26Z|00052|binding|INFO|Removing iface tapf5b2dabe-ea ovn-installed in OVS
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.110 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.115 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:f0:71 192.168.0.184'], port_security=['fa:16:3e:fb:f0:71 192.168.0.184'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5m7msfabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-port-sdeey5zmszca', 'neutron:cidrs': '192.168.0.184/24', 'neutron:device_id': '98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5m7msfabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-port-sdeey5zmszca', 'neutron:project_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d7aa95c-a649-4fd4-9e5a-18c0b6217450', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.195', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d8798ec-229b-449a-9c37-334c24aa485f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=f5b2dabe-ea06-4461-8450-3d306c4cd300) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.118 106686 INFO neutron.agent.ovn.metadata.agent [-] Port f5b2dabe-ea06-4461-8450-3d306c4cd300 in datapath 62eb1d54-32e6-4ea5-8151-f2c97214c84d unbound from our chassis#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.120 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62eb1d54-32e6-4ea5-8151-f2c97214c84d#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.138 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.147 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[477fcf50-5a54-46d3-8e26-cdaf0e38c69e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:14:26 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 11 14:14:26 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 43.032s CPU time.
Dec 11 14:14:26 compute-0 systemd-machined[155778]: Machine qemu-2-instance-00000002 terminated.
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.192 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6da1e7-a4de-421e-b17c-4ac1bdc704bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.196 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[d276a398-cfda-4c67-9c7a-8f8600bca3c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.233 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd9f5e0-6691-415a-a308-61dceeeda17f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.259 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[05d56dce-01a8-49c5-80be-c19e41f8d068]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62eb1d54-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:cc:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378116, 'reachable_time': 34655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245276, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.267 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.275 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.287 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[233ea99d-70f5-4bb5-8bb6-5697345af8ac]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378129, 'tstamp': 378129}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245282, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378131, 'tstamp': 378131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245282, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.289 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62eb1d54-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.290 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.297 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.297 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62eb1d54-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.297 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.298 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62eb1d54-30, col_values=(('external_ids', {'iface-id': 'dd9a733c-26da-4e0b-928d-1f82d21083bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.298 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.317 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.335 189444 INFO nova.virt.libvirt.driver [-] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Instance destroyed successfully.#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.335 189444 DEBUG nova.objects.instance [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'resources' on Instance uuid 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.433 189444 DEBUG nova.virt.libvirt.vif [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:03:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fabwkqt-zebnbeb4nqd3-mbtttzo2k3ml-vnf-patwmoferzma',id=2,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-11T14:03:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='f7b42205-1b4f-49eb-9f02-9c04957c72b4'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-accqusqn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-11T14:03:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTExNzEyMjI5NjIzMzcwOTQzMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTE3MTIyMjk2MjMzNzA5NDMxND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTExNzEyMjI5NjIzMzcwOTQzMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec 11 14:14:26 compute-0 nova_compute[189440]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTE3M
TIyMjk2MjMzNzA5NDMxND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTExNzEyMjI5NjIzMzcwOTQzMTQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMTcxMjIyOTYyMzM3MDk0MzE0PT0tLQo=',user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.433 189444 DEBUG nova.network.os_vif_util [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "address": "fa:16:3e:fb:f0:71", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.184", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5b2dabe-ea", "ovs_interfaceid": "f5b2dabe-ea06-4461-8450-3d306c4cd300", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.434 189444 DEBUG nova.network.os_vif_util [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:f0:71,bridge_name='br-int',has_traffic_filtering=True,id=f5b2dabe-ea06-4461-8450-3d306c4cd300,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf5b2dabe-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.435 189444 DEBUG os_vif [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:f0:71,bridge_name='br-int',has_traffic_filtering=True,id=f5b2dabe-ea06-4461-8450-3d306c4cd300,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf5b2dabe-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.437 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.437 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5b2dabe-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.439 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.442 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.444 189444 INFO os_vif [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:f0:71,bridge_name='br-int',has_traffic_filtering=True,id=f5b2dabe-ea06-4461-8450-3d306c4cd300,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf5b2dabe-ea')#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.445 189444 INFO nova.virt.libvirt.driver [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Deleting instance files /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2_del#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.446 189444 INFO nova.virt.libvirt.driver [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Deletion of /var/lib/nova/instances/98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2_del complete#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.532 189444 DEBUG nova.virt.libvirt.host [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.533 189444 INFO nova.virt.libvirt.host [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] UEFI support detected#033[00m
Dec 11 14:14:26 compute-0 rsyslogd[236802]: message too long (8192) with configured size 8096, begin of message is: 2025-12-11 14:14:26.433 189444 DEBUG nova.virt.libvirt.vif [None req-0d21aede-83 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.540 189444 DEBUG nova.compute.manager [req-6d50f896-79d0-4415-b192-e1138f042334 req-2d90cf9e-a6b3-4d54-a1f2-146d2f513c7f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Received event network-vif-unplugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.541 189444 DEBUG oslo_concurrency.lockutils [req-6d50f896-79d0-4415-b192-e1138f042334 req-2d90cf9e-a6b3-4d54-a1f2-146d2f513c7f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.542 189444 DEBUG oslo_concurrency.lockutils [req-6d50f896-79d0-4415-b192-e1138f042334 req-2d90cf9e-a6b3-4d54-a1f2-146d2f513c7f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.542 189444 DEBUG oslo_concurrency.lockutils [req-6d50f896-79d0-4415-b192-e1138f042334 req-2d90cf9e-a6b3-4d54-a1f2-146d2f513c7f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.543 189444 DEBUG nova.compute.manager [req-6d50f896-79d0-4415-b192-e1138f042334 req-2d90cf9e-a6b3-4d54-a1f2-146d2f513c7f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] No waiting events found dispatching network-vif-unplugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.543 189444 DEBUG nova.compute.manager [req-6d50f896-79d0-4415-b192-e1138f042334 req-2d90cf9e-a6b3-4d54-a1f2-146d2f513c7f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Received event network-vif-unplugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.545 189444 INFO nova.compute.manager [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Took 0.52 seconds to destroy the instance on the hypervisor.#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.546 189444 DEBUG oslo.service.loopingcall [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.546 189444 DEBUG nova.compute.manager [-] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.547 189444 DEBUG nova.network.neutron [-] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.958 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:14:26 compute-0 nova_compute[189440]: 2025-12-11 14:14:26.959 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:26.960 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:14:27 compute-0 nova_compute[189440]: 2025-12-11 14:14:27.942 189444 DEBUG nova.network.neutron [-] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:14:27 compute-0 nova_compute[189440]: 2025-12-11 14:14:27.962 189444 INFO nova.compute.manager [-] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Took 1.42 seconds to deallocate network for instance.#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.011 189444 DEBUG oslo_concurrency.lockutils [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.012 189444 DEBUG oslo_concurrency.lockutils [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.178 189444 DEBUG nova.compute.provider_tree [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.195 189444 DEBUG nova.scheduler.client.report [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.222 189444 DEBUG oslo_concurrency.lockutils [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.261 189444 INFO nova.scheduler.client.report [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Deleted allocations for instance 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.340 189444 DEBUG oslo_concurrency.lockutils [None req-0d21aede-8335-4f71-a765-d5a793a4641c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.660 189444 DEBUG nova.compute.manager [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Received event network-vif-plugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.661 189444 DEBUG oslo_concurrency.lockutils [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.662 189444 DEBUG oslo_concurrency.lockutils [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.663 189444 DEBUG oslo_concurrency.lockutils [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.663 189444 DEBUG nova.compute.manager [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] No waiting events found dispatching network-vif-plugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.664 189444 WARNING nova.compute.manager [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Received unexpected event network-vif-plugged-f5b2dabe-ea06-4461-8450-3d306c4cd300 for instance with vm_state deleted and task_state None.#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.665 189444 DEBUG nova.compute.manager [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Received event network-changed-f5b2dabe-ea06-4461-8450-3d306c4cd300 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.666 189444 DEBUG nova.compute.manager [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Refreshing instance network info cache due to event network-changed-f5b2dabe-ea06-4461-8450-3d306c4cd300. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.667 189444 DEBUG oslo_concurrency.lockutils [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.668 189444 DEBUG oslo_concurrency.lockutils [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.669 189444 DEBUG nova.network.neutron [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Refreshing network info cache for port f5b2dabe-ea06-4461-8450-3d306c4cd300 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:14:28 compute-0 nova_compute[189440]: 2025-12-11 14:14:28.812 189444 DEBUG nova.network.neutron [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:14:29 compute-0 nova_compute[189440]: 2025-12-11 14:14:29.417 189444 DEBUG nova.network.neutron [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec 11 14:14:29 compute-0 nova_compute[189440]: 2025-12-11 14:14:29.418 189444 DEBUG oslo_concurrency.lockutils [req-7f9f7d72-8bbd-4137-bebf-0bc2f57f1440 req-176512ae-7d02-4d76-925f-9945cfba6951 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:14:29 compute-0 podman[203650]: time="2025-12-11T14:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:14:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:14:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec 11 14:14:30 compute-0 nova_compute[189440]: 2025-12-11 14:14:30.611 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:31 compute-0 openstack_network_exporter[205834]: ERROR   14:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:14:31 compute-0 openstack_network_exporter[205834]: ERROR   14:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:14:31 compute-0 openstack_network_exporter[205834]: ERROR   14:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:14:31 compute-0 openstack_network_exporter[205834]: ERROR   14:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:14:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:14:31 compute-0 openstack_network_exporter[205834]: ERROR   14:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:14:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:14:31 compute-0 nova_compute[189440]: 2025-12-11 14:14:31.440 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:35 compute-0 nova_compute[189440]: 2025-12-11 14:14:35.614 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:36 compute-0 nova_compute[189440]: 2025-12-11 14:14:36.444 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:14:36.964 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:14:37 compute-0 podman[245300]: 2025-12-11 14:14:37.5090811 +0000 UTC m=+0.096397262 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:14:40 compute-0 podman[245324]: 2025-12-11 14:14:40.499533248 +0000 UTC m=+0.094183459 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 11 14:14:40 compute-0 nova_compute[189440]: 2025-12-11 14:14:40.616 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:41 compute-0 nova_compute[189440]: 2025-12-11 14:14:41.333 189444 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765462466.3312137, 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:14:41 compute-0 nova_compute[189440]: 2025-12-11 14:14:41.333 189444 INFO nova.compute.manager [-] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] VM Stopped (Lifecycle Event)#033[00m
Dec 11 14:14:41 compute-0 nova_compute[189440]: 2025-12-11 14:14:41.359 189444 DEBUG nova.compute.manager [None req-09a444e7-51cd-4ff3-84fb-77147278fb55 - - - - - -] [instance: 98ac36ce-e8cd-46e4-a0f0-a65f82ea4ce2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:14:41 compute-0 nova_compute[189440]: 2025-12-11 14:14:41.448 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:43 compute-0 podman[245343]: 2025-12-11 14:14:43.531583132 +0000 UTC m=+0.116830522 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 11 14:14:44 compute-0 podman[245362]: 2025-12-11 14:14:44.81055953 +0000 UTC m=+0.100585155 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 11 14:14:44 compute-0 podman[245363]: 2025-12-11 14:14:44.824021729 +0000 UTC m=+0.096144675 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, vcs-type=git, release=1214.1726694543, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Dec 11 14:14:45 compute-0 nova_compute[189440]: 2025-12-11 14:14:45.618 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:46 compute-0 nova_compute[189440]: 2025-12-11 14:14:46.452 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:46 compute-0 podman[245399]: 2025-12-11 14:14:46.51553955 +0000 UTC m=+0.093026700 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true)
Dec 11 14:14:50 compute-0 podman[245421]: 2025-12-11 14:14:50.561909498 +0000 UTC m=+0.146948620 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:14:50 compute-0 nova_compute[189440]: 2025-12-11 14:14:50.621 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:51 compute-0 nova_compute[189440]: 2025-12-11 14:14:51.454 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:52 compute-0 nova_compute[189440]: 2025-12-11 14:14:52.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:14:52 compute-0 nova_compute[189440]: 2025-12-11 14:14:52.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:14:54 compute-0 podman[245448]: 2025-12-11 14:14:54.487594794 +0000 UTC m=+0.084861039 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Dec 11 14:14:55 compute-0 nova_compute[189440]: 2025-12-11 14:14:55.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:14:55 compute-0 nova_compute[189440]: 2025-12-11 14:14:55.624 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:56 compute-0 nova_compute[189440]: 2025-12-11 14:14:56.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:14:56 compute-0 nova_compute[189440]: 2025-12-11 14:14:56.456 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:14:56 compute-0 podman[245468]: 2025-12-11 14:14:56.51409951 +0000 UTC m=+0.108617521 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:14:57 compute-0 nova_compute[189440]: 2025-12-11 14:14:57.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:14:57 compute-0 nova_compute[189440]: 2025-12-11 14:14:57.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:14:58 compute-0 nova_compute[189440]: 2025-12-11 14:14:58.372 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:14:58 compute-0 nova_compute[189440]: 2025-12-11 14:14:58.373 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:14:58 compute-0 nova_compute[189440]: 2025-12-11 14:14:58.373 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:14:59 compute-0 ovn_controller[97832]: 2025-12-11T14:14:59Z|00053|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 11 14:14:59 compute-0 podman[203650]: time="2025-12-11T14:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:14:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:14:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec 11 14:15:00 compute-0 nova_compute[189440]: 2025-12-11 14:15:00.628 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:01 compute-0 openstack_network_exporter[205834]: ERROR   14:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:15:01 compute-0 openstack_network_exporter[205834]: ERROR   14:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:15:01 compute-0 openstack_network_exporter[205834]: ERROR   14:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:15:01 compute-0 openstack_network_exporter[205834]: ERROR   14:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:15:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:15:01 compute-0 openstack_network_exporter[205834]: ERROR   14:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:15:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:15:01 compute-0 nova_compute[189440]: 2025-12-11 14:15:01.459 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.287 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Updating instance_info_cache with network_info: [{"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.302 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.302 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.303 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.303 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.304 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.304 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.327 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.327 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.328 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.328 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.418 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.480 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.482 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.548 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.551 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.621 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.623 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.685 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.696 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.771 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.773 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.835 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.837 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.905 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.907 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.970 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:02 compute-0 nova_compute[189440]: 2025-12-11 14:15:02.984 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.044 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.045 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.132 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.135 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.215 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.217 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.275 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.644 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.646 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4797MB free_disk=72.32958602905273GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.646 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.647 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.737 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.738 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.738 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.738 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.739 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.824 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.842 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.873 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:15:03 compute-0 nova_compute[189440]: 2025-12-11 14:15:03.874 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:15:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:15:04.092 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:15:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:15:04.093 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:15:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:15:04.094 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:15:05 compute-0 nova_compute[189440]: 2025-12-11 14:15:05.630 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:06 compute-0 nova_compute[189440]: 2025-12-11 14:15:06.463 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:08 compute-0 podman[245529]: 2025-12-11 14:15:08.53604308 +0000 UTC m=+0.119640582 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:15:08 compute-0 nova_compute[189440]: 2025-12-11 14:15:08.871 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:15:08 compute-0 nova_compute[189440]: 2025-12-11 14:15:08.872 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:15:10 compute-0 nova_compute[189440]: 2025-12-11 14:15:10.634 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:11 compute-0 nova_compute[189440]: 2025-12-11 14:15:11.464 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:11 compute-0 podman[245551]: 2025-12-11 14:15:11.561563436 +0000 UTC m=+0.145133836 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:15:14 compute-0 podman[245569]: 2025-12-11 14:15:14.526719243 +0000 UTC m=+0.113510041 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 14:15:15 compute-0 podman[245588]: 2025-12-11 14:15:15.521307685 +0000 UTC m=+0.118189657 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 11 14:15:15 compute-0 podman[245589]: 2025-12-11 14:15:15.541185351 +0000 UTC m=+0.129420761 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, architecture=x86_64, release=1214.1726694543, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., 
description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9)
Dec 11 14:15:15 compute-0 nova_compute[189440]: 2025-12-11 14:15:15.637 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:16 compute-0 nova_compute[189440]: 2025-12-11 14:15:16.467 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:17 compute-0 podman[245623]: 2025-12-11 14:15:17.518336268 +0000 UTC m=+0.112239510 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210)
Dec 11 14:15:20 compute-0 nova_compute[189440]: 2025-12-11 14:15:20.640 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:21 compute-0 nova_compute[189440]: 2025-12-11 14:15:21.470 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:21 compute-0 podman[245643]: 2025-12-11 14:15:21.544735468 +0000 UTC m=+0.127802463 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 11 14:15:25 compute-0 podman[245668]: 2025-12-11 14:15:25.538327855 +0000 UTC m=+0.125163317 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-type=git, io.buildah.version=1.33.7)
Dec 11 14:15:25 compute-0 nova_compute[189440]: 2025-12-11 14:15:25.642 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:26 compute-0 nova_compute[189440]: 2025-12-11 14:15:26.473 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:27 compute-0 podman[245688]: 2025-12-11 14:15:27.516546818 +0000 UTC m=+0.091853051 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 14:15:29 compute-0 podman[203650]: time="2025-12-11T14:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:15:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:15:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec 11 14:15:30 compute-0 nova_compute[189440]: 2025-12-11 14:15:30.645 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:31 compute-0 openstack_network_exporter[205834]: ERROR   14:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:15:31 compute-0 openstack_network_exporter[205834]: ERROR   14:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:15:31 compute-0 openstack_network_exporter[205834]: ERROR   14:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:15:31 compute-0 openstack_network_exporter[205834]: ERROR   14:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:15:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:15:31 compute-0 openstack_network_exporter[205834]: ERROR   14:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:15:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:15:31 compute-0 nova_compute[189440]: 2025-12-11 14:15:31.475 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:35 compute-0 nova_compute[189440]: 2025-12-11 14:15:35.648 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:36 compute-0 nova_compute[189440]: 2025-12-11 14:15:36.479 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:39 compute-0 podman[245712]: 2025-12-11 14:15:39.534059593 +0000 UTC m=+0.116323441 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:15:40 compute-0 nova_compute[189440]: 2025-12-11 14:15:40.651 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:41 compute-0 nova_compute[189440]: 2025-12-11 14:15:41.481 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:42 compute-0 podman[245736]: 2025-12-11 14:15:42.555378235 +0000 UTC m=+0.133728577 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.985 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.985 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d460>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:42.999 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '081c0041-e68f-4fa9-8c7b-7139d21acf6b', 'name': 'vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.006 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '125c0574-9fcf-4ecf-9bd8-c4008826d3b3', 'name': 'vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.011 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '82437023-b24d-48bf-af1c-d1957df4da67', 'name': 'test_0', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.012 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:15:43.013153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.021 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.028 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.035 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.037 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:15:43.037962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.078 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/cpu volume: 38450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.122 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/cpu volume: 34500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.177 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/cpu volume: 45810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.179 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:15:43.179444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.232 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.233 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.234 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.280 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.281 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.282 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.330 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.331 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.331 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.332 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.332 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.332 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.333 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.334 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:15:43.332518) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.334 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.334 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/memory.usage volume: 48.984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.334 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.334 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.335 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.336 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.336 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.336 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:15:43.334345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.337 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.337 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.337 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.337 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.338 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.339 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:15:43.335875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.339 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:15:43.337485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.339 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.339 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.340 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.340 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.340 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.341 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.341 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.341 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.342 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.342 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.342 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.342 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.343 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.343 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:15:43.339553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.343 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.343 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.344 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.344 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.345 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:15:43.340938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:15:43.342284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:15:43.345004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.459 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 500931517 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.460 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 79030432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.461 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.latency volume: 61428410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.555 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 406025219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.556 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 74406979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.556 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 55584693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.646 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 414087761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.648 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 86850533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.649 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 54519228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.651 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.653 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.653 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.654 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.654 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.654 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.655 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.655 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.655 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.656 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.656 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.657 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.657 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.657 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.657 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.658 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.658 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.658 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.658 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.659 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.659 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.659 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.660 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.660 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.661 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.661 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.661 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.661 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.661 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.662 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.662 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.662 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 41791488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.663 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.663 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:15:43.653195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.664 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.664 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.666 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.668 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.669 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.670 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 1759291958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.671 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 10306999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.671 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.672 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 1481953607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.672 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 9758476 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.673 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.674 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 1535528083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.674 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 13914030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:15:43.657884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.675 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.677 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:15:43.661834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.677 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.677 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.677 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.678 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.678 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:15:43.669555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.679 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:15:43.677824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.679 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.680 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.680 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.680 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.681 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.681 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.682 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.682 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.683 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.683 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.684 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.684 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.685 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.685 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.685 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:15:43.680545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.685 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.685 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.686 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.686 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.686 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.687 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.687 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.689 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:15:43.685887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.689 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.689 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:15:43.688042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.690 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.690 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.690 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:15:43.689573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.691 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.691 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:15:43.691748) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.692 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.693 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.693 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.693 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.693 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.693 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.693 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.693 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.694 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.695 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:15:43.693517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.695 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.695 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.695 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.696 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:15:43.695540) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.697 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.697 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.697 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.697 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.698 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.698 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:15:43.697646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.698 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.698 14 DEBUG ceilometer.compute.pollsters [-] 081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.698 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.699 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.699 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.699 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.700 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.700 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.701 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.701 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.704 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.704 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.704 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.704 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.704 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.704 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:15:43.704 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:15:44 compute-0 podman[245757]: 2025-12-11 14:15:44.819678924 +0000 UTC m=+0.135423641 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 14:15:45 compute-0 nova_compute[189440]: 2025-12-11 14:15:45.654 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:46 compute-0 nova_compute[189440]: 2025-12-11 14:15:46.484 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:46 compute-0 podman[245776]: 2025-12-11 14:15:46.525232378 +0000 UTC m=+0.119914366 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 11 14:15:46 compute-0 podman[245777]: 2025-12-11 14:15:46.528048056 +0000 UTC m=+0.109824211 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 11 14:15:48 compute-0 podman[245813]: 2025-12-11 14:15:48.556270296 +0000 UTC m=+0.141240453 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.build-date=20251210, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Dec 11 14:15:50 compute-0 nova_compute[189440]: 2025-12-11 14:15:50.656 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:51 compute-0 nova_compute[189440]: 2025-12-11 14:15:51.487 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:52 compute-0 podman[245834]: 2025-12-11 14:15:52.696667306 +0000 UTC m=+0.166435812 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 11 14:15:53 compute-0 nova_compute[189440]: 2025-12-11 14:15:53.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:15:53 compute-0 nova_compute[189440]: 2025-12-11 14:15:53.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:15:55 compute-0 nova_compute[189440]: 2025-12-11 14:15:55.661 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:56 compute-0 nova_compute[189440]: 2025-12-11 14:15:56.237 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:15:56 compute-0 nova_compute[189440]: 2025-12-11 14:15:56.489 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:15:56 compute-0 podman[245860]: 2025-12-11 14:15:56.532410936 +0000 UTC m=+0.121235457 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.6, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 14:15:58 compute-0 nova_compute[189440]: 2025-12-11 14:15:58.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:15:58 compute-0 nova_compute[189440]: 2025-12-11 14:15:58.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:15:58 compute-0 podman[245881]: 2025-12-11 14:15:58.492284041 +0000 UTC m=+0.083091345 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:15:59 compute-0 nova_compute[189440]: 2025-12-11 14:15:59.408 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:15:59 compute-0 nova_compute[189440]: 2025-12-11 14:15:59.408 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:15:59 compute-0 nova_compute[189440]: 2025-12-11 14:15:59.408 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:15:59 compute-0 podman[203650]: time="2025-12-11T14:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:15:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:15:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
Dec 11 14:16:00 compute-0 nova_compute[189440]: 2025-12-11 14:16:00.663 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:01 compute-0 openstack_network_exporter[205834]: ERROR   14:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:16:01 compute-0 openstack_network_exporter[205834]: ERROR   14:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:16:01 compute-0 openstack_network_exporter[205834]: ERROR   14:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:16:01 compute-0 openstack_network_exporter[205834]: ERROR   14:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:16:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:16:01 compute-0 openstack_network_exporter[205834]: ERROR   14:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:16:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:16:01 compute-0 nova_compute[189440]: 2025-12-11 14:16:01.492 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:01 compute-0 nova_compute[189440]: 2025-12-11 14:16:01.723 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updating instance_info_cache with network_info: [{"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:16:01 compute-0 nova_compute[189440]: 2025-12-11 14:16:01.744 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:16:01 compute-0 nova_compute[189440]: 2025-12-11 14:16:01.745 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:16:01 compute-0 nova_compute[189440]: 2025-12-11 14:16:01.746 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:01 compute-0 nova_compute[189440]: 2025-12-11 14:16:01.747 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.263 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.264 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.264 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.264 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.399 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.495 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.496 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.577 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.579 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.657 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.660 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.723 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.732 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.795 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.796 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.870 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.871 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.938 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.939 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:02 compute-0 nova_compute[189440]: 2025-12-11 14:16:02.998 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.005 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.064 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.065 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.144 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.145 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.239 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.240 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.304 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.707 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.708 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4776MB free_disk=72.32958602905273GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.709 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.709 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.788 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.788 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.788 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.788 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.789 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.882 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.897 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.899 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:16:03 compute-0 nova_compute[189440]: 2025-12-11 14:16:03.899 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:16:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:04.094 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:16:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:04.095 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:16:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:04.100 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:16:04 compute-0 nova_compute[189440]: 2025-12-11 14:16:04.899 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:04 compute-0 nova_compute[189440]: 2025-12-11 14:16:04.899 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:05 compute-0 nova_compute[189440]: 2025-12-11 14:16:05.666 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:06 compute-0 nova_compute[189440]: 2025-12-11 14:16:06.495 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:10 compute-0 podman[245939]: 2025-12-11 14:16:10.540528237 +0000 UTC m=+0.123828938 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:16:10 compute-0 nova_compute[189440]: 2025-12-11 14:16:10.667 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:11 compute-0 nova_compute[189440]: 2025-12-11 14:16:11.499 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:13 compute-0 podman[245960]: 2025-12-11 14:16:13.54356469 +0000 UTC m=+0.126870312 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 11 14:16:15 compute-0 podman[245980]: 2025-12-11 14:16:15.480485057 +0000 UTC m=+0.083280870 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 11 14:16:15 compute-0 nova_compute[189440]: 2025-12-11 14:16:15.671 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:16 compute-0 nova_compute[189440]: 2025-12-11 14:16:16.501 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:17 compute-0 podman[245997]: 2025-12-11 14:16:17.531817943 +0000 UTC m=+0.115794992 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 11 14:16:17 compute-0 podman[245998]: 2025-12-11 14:16:17.554584808 +0000 UTC m=+0.132402698 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_id=edpm, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9)
Dec 11 14:16:19 compute-0 podman[246034]: 2025-12-11 14:16:19.523623008 +0000 UTC m=+0.103959553 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251210, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 14:16:20 compute-0 nova_compute[189440]: 2025-12-11 14:16:20.673 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:21 compute-0 nova_compute[189440]: 2025-12-11 14:16:21.505 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:23 compute-0 podman[246054]: 2025-12-11 14:16:23.585633624 +0000 UTC m=+0.162011999 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 11 14:16:25 compute-0 nova_compute[189440]: 2025-12-11 14:16:25.677 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.060 189444 DEBUG oslo_concurrency.lockutils [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.061 189444 DEBUG oslo_concurrency.lockutils [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.062 189444 DEBUG oslo_concurrency.lockutils [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.063 189444 DEBUG oslo_concurrency.lockutils [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.063 189444 DEBUG oslo_concurrency.lockutils [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.066 189444 INFO nova.compute.manager [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Terminating instance#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.068 189444 DEBUG nova.compute.manager [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 11 14:16:26 compute-0 kernel: tapb755009c-68 (unregistering): left promiscuous mode
Dec 11 14:16:26 compute-0 NetworkManager[56353]: <info>  [1765462586.1303] device (tapb755009c-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.156 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:26 compute-0 ovn_controller[97832]: 2025-12-11T14:16:26Z|00054|binding|INFO|Releasing lport b755009c-68a9-44e9-96bc-c78ee69f1950 from this chassis (sb_readonly=0)
Dec 11 14:16:26 compute-0 ovn_controller[97832]: 2025-12-11T14:16:26Z|00055|binding|INFO|Setting lport b755009c-68a9-44e9-96bc-c78ee69f1950 down in Southbound
Dec 11 14:16:26 compute-0 ovn_controller[97832]: 2025-12-11T14:16:26Z|00056|binding|INFO|Removing iface tapb755009c-68 ovn-installed in OVS
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.162 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.169 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:0f:5b 192.168.0.45'], port_security=['fa:16:3e:5d:0f:5b 192.168.0.45'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5m7msfabwkqt-ial5xpuq4kr3-ljplzuufq3xt-port-g5qtq5s5dan5', 'neutron:cidrs': '192.168.0.45/24', 'neutron:device_id': '081c0041-e68f-4fa9-8c7b-7139d21acf6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5m7msfabwkqt-ial5xpuq4kr3-ljplzuufq3xt-port-g5qtq5s5dan5', 'neutron:project_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d7aa95c-a649-4fd4-9e5a-18c0b6217450', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.242', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d8798ec-229b-449a-9c37-334c24aa485f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=b755009c-68a9-44e9-96bc-c78ee69f1950) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.172 106686 INFO neutron.agent.ovn.metadata.agent [-] Port b755009c-68a9-44e9-96bc-c78ee69f1950 in datapath 62eb1d54-32e6-4ea5-8151-f2c97214c84d unbound from our chassis#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.174 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62eb1d54-32e6-4ea5-8151-f2c97214c84d#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.195 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:26 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec 11 14:16:26 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 37.276s CPU time.
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.208 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[14093846-b191-4505-aebb-0c67b5a2ed7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:16:26 compute-0 systemd-machined[155778]: Machine qemu-3-instance-00000003 terminated.
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.260 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[0c52ba13-abb6-4d42-9110-a75d29e43d97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.267 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[d70a555e-8b3b-4176-95b2-7abdcad05035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.299 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.307 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.324 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[11947ac8-5813-46d2-91f6-653497854f8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.359 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[77ede1a4-8aba-40d2-b936-36bb5856f1ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62eb1d54-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:cc:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378116, 'reachable_time': 34655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246105, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.381 189444 INFO nova.virt.libvirt.driver [-] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Instance destroyed successfully.#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.382 189444 DEBUG nova.objects.instance [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'resources' on Instance uuid 081c0041-e68f-4fa9-8c7b-7139d21acf6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.392 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[60b072de-f689-4832-a84e-f9a52cf1c476]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378129, 'tstamp': 378129}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246113, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378131, 'tstamp': 378131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246113, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.393 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62eb1d54-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.395 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.398 189444 DEBUG nova.virt.libvirt.vif [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fabwkqt-ial5xpuq4kr3-ljplzuufq3xt-vnf-bfrygpn3e2cz',id=3,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-11T14:08:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='f7b42205-1b4f-49eb-9f02-9c04957c72b4'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-8y6uuoad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-11T14:08:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY0MzU3NTUwMDU2NzQzNzcwMzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjQzNTc1NTAwNTY3NDM3NzAzMT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY0MzU3NTUwMDU2NzQzNzcwMzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec 11 14:16:26 compute-0 nova_compute[189440]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjQzN
Tc1NTAwNTY3NDM3NzAzMT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY0MzU3NTUwMDU2NzQzNzcwMzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NDM1NzU1MDA1Njc0Mzc3MDMxPT0tLQo=',user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=081c0041-e68f-4fa9-8c7b-7139d21acf6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.398 189444 DEBUG nova.network.os_vif_util [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "b755009c-68a9-44e9-96bc-c78ee69f1950", "address": "fa:16:3e:5d:0f:5b", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb755009c-68", "ovs_interfaceid": "b755009c-68a9-44e9-96bc-c78ee69f1950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.400 189444 DEBUG nova.network.os_vif_util [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:0f:5b,bridge_name='br-int',has_traffic_filtering=True,id=b755009c-68a9-44e9-96bc-c78ee69f1950,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb755009c-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.400 189444 DEBUG os_vif [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:0f:5b,bridge_name='br-int',has_traffic_filtering=True,id=b755009c-68a9-44e9-96bc-c78ee69f1950,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb755009c-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.402 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.403 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb755009c-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.404 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62eb1d54-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.405 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.406 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62eb1d54-30, col_values=(('external_ids', {'iface-id': 'dd9a733c-26da-4e0b-928d-1f82d21083bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.406 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.406 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.409 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.412 189444 INFO os_vif [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:0f:5b,bridge_name='br-int',has_traffic_filtering=True,id=b755009c-68a9-44e9-96bc-c78ee69f1950,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb755009c-68')#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.413 189444 INFO nova.virt.libvirt.driver [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Deleting instance files /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b_del#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.414 189444 INFO nova.virt.libvirt.driver [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Deletion of /var/lib/nova/instances/081c0041-e68f-4fa9-8c7b-7139d21acf6b_del complete#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.478 189444 INFO nova.compute.manager [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.479 189444 DEBUG oslo.service.loopingcall [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.480 189444 DEBUG nova.compute.manager [-] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.480 189444 DEBUG nova.network.neutron [-] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.571 189444 DEBUG nova.compute.manager [req-ce153e5e-c922-46de-bf9d-69155b7163ca req-3b4a41ac-2a25-4f44-9be0-af42338b02d5 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Received event network-vif-unplugged-b755009c-68a9-44e9-96bc-c78ee69f1950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.572 189444 DEBUG oslo_concurrency.lockutils [req-ce153e5e-c922-46de-bf9d-69155b7163ca req-3b4a41ac-2a25-4f44-9be0-af42338b02d5 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.572 189444 DEBUG oslo_concurrency.lockutils [req-ce153e5e-c922-46de-bf9d-69155b7163ca req-3b4a41ac-2a25-4f44-9be0-af42338b02d5 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.572 189444 DEBUG oslo_concurrency.lockutils [req-ce153e5e-c922-46de-bf9d-69155b7163ca req-3b4a41ac-2a25-4f44-9be0-af42338b02d5 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.572 189444 DEBUG nova.compute.manager [req-ce153e5e-c922-46de-bf9d-69155b7163ca req-3b4a41ac-2a25-4f44-9be0-af42338b02d5 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] No waiting events found dispatching network-vif-unplugged-b755009c-68a9-44e9-96bc-c78ee69f1950 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.573 189444 DEBUG nova.compute.manager [req-ce153e5e-c922-46de-bf9d-69155b7163ca req-3b4a41ac-2a25-4f44-9be0-af42338b02d5 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Received event network-vif-unplugged-b755009c-68a9-44e9-96bc-c78ee69f1950 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 11 14:16:26 compute-0 rsyslogd[236802]: message too long (8192) with configured size 8096, begin of message is: 2025-12-11 14:16:26.398 189444 DEBUG nova.virt.libvirt.vif [None req-7fc5a91d-aa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.723 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:16:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:26.724 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:16:26 compute-0 nova_compute[189440]: 2025-12-11 14:16:26.726 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:27 compute-0 podman[246115]: 2025-12-11 14:16:27.544422295 +0000 UTC m=+0.116180152 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64)
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.682 189444 DEBUG nova.compute.manager [req-6f4ec5fe-23d0-4227-9da9-c40efec98695 req-bcaf1fd9-2552-4ae2-9f1c-0c0c5bc66202 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Received event network-changed-b755009c-68a9-44e9-96bc-c78ee69f1950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.683 189444 DEBUG nova.compute.manager [req-6f4ec5fe-23d0-4227-9da9-c40efec98695 req-bcaf1fd9-2552-4ae2-9f1c-0c0c5bc66202 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Refreshing instance network info cache due to event network-changed-b755009c-68a9-44e9-96bc-c78ee69f1950. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.683 189444 DEBUG oslo_concurrency.lockutils [req-6f4ec5fe-23d0-4227-9da9-c40efec98695 req-bcaf1fd9-2552-4ae2-9f1c-0c0c5bc66202 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.684 189444 DEBUG oslo_concurrency.lockutils [req-6f4ec5fe-23d0-4227-9da9-c40efec98695 req-bcaf1fd9-2552-4ae2-9f1c-0c0c5bc66202 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.684 189444 DEBUG nova.network.neutron [req-6f4ec5fe-23d0-4227-9da9-c40efec98695 req-bcaf1fd9-2552-4ae2-9f1c-0c0c5bc66202 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Refreshing network info cache for port b755009c-68a9-44e9-96bc-c78ee69f1950 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.912 189444 INFO nova.network.neutron [req-6f4ec5fe-23d0-4227-9da9-c40efec98695 req-bcaf1fd9-2552-4ae2-9f1c-0c0c5bc66202 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Port b755009c-68a9-44e9-96bc-c78ee69f1950 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.912 189444 DEBUG nova.network.neutron [req-6f4ec5fe-23d0-4227-9da9-c40efec98695 req-bcaf1fd9-2552-4ae2-9f1c-0c0c5bc66202 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.950 189444 DEBUG oslo_concurrency.lockutils [req-6f4ec5fe-23d0-4227-9da9-c40efec98695 req-bcaf1fd9-2552-4ae2-9f1c-0c0c5bc66202 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-081c0041-e68f-4fa9-8c7b-7139d21acf6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.962 189444 DEBUG nova.network.neutron [-] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:16:27 compute-0 nova_compute[189440]: 2025-12-11 14:16:27.984 189444 INFO nova.compute.manager [-] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Took 1.50 seconds to deallocate network for instance.#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.039 189444 DEBUG oslo_concurrency.lockutils [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.040 189444 DEBUG oslo_concurrency.lockutils [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.157 189444 DEBUG nova.compute.provider_tree [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.186 189444 DEBUG nova.scheduler.client.report [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.219 189444 DEBUG oslo_concurrency.lockutils [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.260 189444 INFO nova.scheduler.client.report [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Deleted allocations for instance 081c0041-e68f-4fa9-8c7b-7139d21acf6b#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.365 189444 DEBUG oslo_concurrency.lockutils [None req-7fc5a91d-aa3d-40f2-89c3-aaa58c57657c 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.688 189444 DEBUG nova.compute.manager [req-81282390-1585-4fde-af6c-c55d35452ddf req-635bf9bd-6203-447e-b557-1780b5e1df44 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Received event network-vif-plugged-b755009c-68a9-44e9-96bc-c78ee69f1950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.689 189444 DEBUG oslo_concurrency.lockutils [req-81282390-1585-4fde-af6c-c55d35452ddf req-635bf9bd-6203-447e-b557-1780b5e1df44 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.689 189444 DEBUG oslo_concurrency.lockutils [req-81282390-1585-4fde-af6c-c55d35452ddf req-635bf9bd-6203-447e-b557-1780b5e1df44 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.690 189444 DEBUG oslo_concurrency.lockutils [req-81282390-1585-4fde-af6c-c55d35452ddf req-635bf9bd-6203-447e-b557-1780b5e1df44 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "081c0041-e68f-4fa9-8c7b-7139d21acf6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.690 189444 DEBUG nova.compute.manager [req-81282390-1585-4fde-af6c-c55d35452ddf req-635bf9bd-6203-447e-b557-1780b5e1df44 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] No waiting events found dispatching network-vif-plugged-b755009c-68a9-44e9-96bc-c78ee69f1950 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:16:28 compute-0 nova_compute[189440]: 2025-12-11 14:16:28.691 189444 WARNING nova.compute.manager [req-81282390-1585-4fde-af6c-c55d35452ddf req-635bf9bd-6203-447e-b557-1780b5e1df44 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Received unexpected event network-vif-plugged-b755009c-68a9-44e9-96bc-c78ee69f1950 for instance with vm_state deleted and task_state None.#033[00m
Dec 11 14:16:29 compute-0 podman[246134]: 2025-12-11 14:16:29.550745643 +0000 UTC m=+0.136347414 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 14:16:29 compute-0 podman[203650]: time="2025-12-11T14:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:16:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:16:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec 11 14:16:30 compute-0 nova_compute[189440]: 2025-12-11 14:16:30.681 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:31 compute-0 nova_compute[189440]: 2025-12-11 14:16:31.405 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:31 compute-0 openstack_network_exporter[205834]: ERROR   14:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:16:31 compute-0 openstack_network_exporter[205834]: ERROR   14:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:16:31 compute-0 openstack_network_exporter[205834]: ERROR   14:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:16:31 compute-0 openstack_network_exporter[205834]: ERROR   14:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:16:31 compute-0 openstack_network_exporter[205834]: ERROR   14:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:16:31 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:16:31.726 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:16:35 compute-0 nova_compute[189440]: 2025-12-11 14:16:35.684 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:36 compute-0 nova_compute[189440]: 2025-12-11 14:16:36.408 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:40 compute-0 nova_compute[189440]: 2025-12-11 14:16:40.687 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:41 compute-0 nova_compute[189440]: 2025-12-11 14:16:41.377 189444 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765462586.3752644, 081c0041-e68f-4fa9-8c7b-7139d21acf6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:16:41 compute-0 nova_compute[189440]: 2025-12-11 14:16:41.378 189444 INFO nova.compute.manager [-] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] VM Stopped (Lifecycle Event)#033[00m
Dec 11 14:16:41 compute-0 nova_compute[189440]: 2025-12-11 14:16:41.409 189444 DEBUG nova.compute.manager [None req-4f73432a-e179-49ef-8d36-fc489cd9ea5f - - - - - -] [instance: 081c0041-e68f-4fa9-8c7b-7139d21acf6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:16:41 compute-0 nova_compute[189440]: 2025-12-11 14:16:41.410 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:41 compute-0 podman[246157]: 2025-12-11 14:16:41.489463592 +0000 UTC m=+0.088198190 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:16:44 compute-0 podman[246180]: 2025-12-11 14:16:44.521750286 +0000 UTC m=+0.117097754 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:16:45 compute-0 nova_compute[189440]: 2025-12-11 14:16:45.690 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:46 compute-0 nova_compute[189440]: 2025-12-11 14:16:46.413 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:46 compute-0 podman[246199]: 2025-12-11 14:16:46.5202517 +0000 UTC m=+0.103889762 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:16:48 compute-0 podman[246217]: 2025-12-11 14:16:48.514395989 +0000 UTC m=+0.096438531 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 14:16:48 compute-0 podman[246218]: 2025-12-11 14:16:48.521676866 +0000 UTC m=+0.103133744 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.openshift.tags=base rhel9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, name=ubi9, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 11 14:16:50 compute-0 podman[246256]: 2025-12-11 14:16:50.570547319 +0000 UTC m=+0.155351927 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Dec 11 14:16:50 compute-0 nova_compute[189440]: 2025-12-11 14:16:50.692 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:51 compute-0 systemd-logind[786]: New session 29 of user zuul.
Dec 11 14:16:51 compute-0 nova_compute[189440]: 2025-12-11 14:16:51.417 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:51 compute-0 systemd[1]: Started Session 29 of User zuul.
Dec 11 14:16:52 compute-0 python3[246455]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 14:16:54 compute-0 podman[246492]: 2025-12-11 14:16:54.544439425 +0000 UTC m=+0.133125284 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 11 14:16:55 compute-0 nova_compute[189440]: 2025-12-11 14:16:55.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:55 compute-0 nova_compute[189440]: 2025-12-11 14:16:55.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:16:55 compute-0 nova_compute[189440]: 2025-12-11 14:16:55.696 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:56 compute-0 nova_compute[189440]: 2025-12-11 14:16:56.420 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:16:57 compute-0 nova_compute[189440]: 2025-12-11 14:16:57.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:58 compute-0 nova_compute[189440]: 2025-12-11 14:16:58.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:58 compute-0 podman[246517]: 2025-12-11 14:16:58.530243783 +0000 UTC m=+0.129489646 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 11 14:16:59 compute-0 nova_compute[189440]: 2025-12-11 14:16:59.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:16:59 compute-0 nova_compute[189440]: 2025-12-11 14:16:59.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:16:59 compute-0 nova_compute[189440]: 2025-12-11 14:16:59.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:16:59 compute-0 nova_compute[189440]: 2025-12-11 14:16:59.471 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:16:59 compute-0 nova_compute[189440]: 2025-12-11 14:16:59.472 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:16:59 compute-0 nova_compute[189440]: 2025-12-11 14:16:59.473 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:16:59 compute-0 nova_compute[189440]: 2025-12-11 14:16:59.473 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:16:59 compute-0 podman[203650]: time="2025-12-11T14:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:16:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:16:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Dec 11 14:17:00 compute-0 ovn_controller[97832]: 2025-12-11T14:17:00Z|00057|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec 11 14:17:00 compute-0 podman[246538]: 2025-12-11 14:17:00.484560972 +0000 UTC m=+0.076129875 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:17:00 compute-0 nova_compute[189440]: 2025-12-11 14:17:00.699 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:01 compute-0 openstack_network_exporter[205834]: ERROR   14:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:17:01 compute-0 openstack_network_exporter[205834]: ERROR   14:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:17:01 compute-0 openstack_network_exporter[205834]: ERROR   14:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:17:01 compute-0 openstack_network_exporter[205834]: ERROR   14:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:17:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:17:01 compute-0 openstack_network_exporter[205834]: ERROR   14:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:17:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:17:01 compute-0 nova_compute[189440]: 2025-12-11 14:17:01.424 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:01 compute-0 nova_compute[189440]: 2025-12-11 14:17:01.505 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:17:01 compute-0 nova_compute[189440]: 2025-12-11 14:17:01.531 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:17:01 compute-0 nova_compute[189440]: 2025-12-11 14:17:01.531 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:17:01 compute-0 nova_compute[189440]: 2025-12-11 14:17:01.533 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:17:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:17:04.095 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:17:04.096 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:17:04.097 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.270 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.271 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.271 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.272 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.374 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.450 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.451 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.517 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.518 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.590 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.592 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.653 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.662 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.727 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.728 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.799 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.801 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.869 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.870 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:04 compute-0 nova_compute[189440]: 2025-12-11 14:17:04.937 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.298 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.300 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4929MB free_disk=72.35206604003906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.300 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.301 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.452 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.453 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.455 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.455 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.547 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.562 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.601 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.601 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:05 compute-0 nova_compute[189440]: 2025-12-11 14:17:05.701 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:06 compute-0 nova_compute[189440]: 2025-12-11 14:17:06.428 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:06 compute-0 nova_compute[189440]: 2025-12-11 14:17:06.600 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:17:07 compute-0 nova_compute[189440]: 2025-12-11 14:17:07.372 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:17:07 compute-0 nova_compute[189440]: 2025-12-11 14:17:07.373 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:17:10 compute-0 nova_compute[189440]: 2025-12-11 14:17:10.703 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:11 compute-0 nova_compute[189440]: 2025-12-11 14:17:11.430 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:12 compute-0 podman[246588]: 2025-12-11 14:17:12.529530715 +0000 UTC m=+0.113646931 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:17:12 compute-0 nova_compute[189440]: 2025-12-11 14:17:12.652 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "03287a0e-c7ac-454e-a7e7-81f9ba3f11bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:12 compute-0 nova_compute[189440]: 2025-12-11 14:17:12.653 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "03287a0e-c7ac-454e-a7e7-81f9ba3f11bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:12 compute-0 nova_compute[189440]: 2025-12-11 14:17:12.671 189444 DEBUG nova.compute.manager [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 11 14:17:12 compute-0 nova_compute[189440]: 2025-12-11 14:17:12.738 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:12 compute-0 nova_compute[189440]: 2025-12-11 14:17:12.738 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:12 compute-0 nova_compute[189440]: 2025-12-11 14:17:12.748 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 11 14:17:12 compute-0 nova_compute[189440]: 2025-12-11 14:17:12.748 189444 INFO nova.compute.claims [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 11 14:17:13 compute-0 nova_compute[189440]: 2025-12-11 14:17:13.274 189444 DEBUG nova.compute.provider_tree [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.672 189444 DEBUG nova.scheduler.client.report [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.700 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.701 189444 DEBUG nova.compute.manager [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.825 189444 DEBUG nova.compute.manager [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec 11 14:17:14 compute-0 podman[246612]: 2025-12-11 14:17:14.829906717 +0000 UTC m=+0.135913363 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.840 189444 INFO nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.879 189444 DEBUG nova.compute.manager [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.975 189444 DEBUG nova.compute.manager [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.976 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.977 189444 INFO nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Creating image(s)#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.978 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.978 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.979 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.979 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "2d7c65d8bb86e8121bce6ece4bef12d64fb67e72" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:14 compute-0 nova_compute[189440]: 2025-12-11 14:17:14.980 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "2d7c65d8bb86e8121bce6ece4bef12d64fb67e72" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:15 compute-0 nova_compute[189440]: 2025-12-11 14:17:15.706 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.207 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.290 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72.part --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.291 189444 DEBUG nova.virt.images [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] 387e0fc7-8558-4207-962d-1375e3941d5e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.292 189444 DEBUG nova.privsep.utils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.293 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72.part /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.432 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.515 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72.part /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72.converted" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.521 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.595 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72.converted --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.596 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "2d7c65d8bb86e8121bce6ece4bef12d64fb67e72" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.610 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.702 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.704 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "2d7c65d8bb86e8121bce6ece4bef12d64fb67e72" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.705 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "2d7c65d8bb86e8121bce6ece4bef12d64fb67e72" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.721 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.783 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.784 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72,backing_fmt=raw /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.860 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72,backing_fmt=raw /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk 1073741824" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.861 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "2d7c65d8bb86e8121bce6ece4bef12d64fb67e72" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.861 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.936 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.937 189444 DEBUG nova.virt.disk.api [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Checking if we can resize image /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.937 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.995 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.996 189444 DEBUG nova.virt.disk.api [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Cannot resize image /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec 11 14:17:16 compute-0 nova_compute[189440]: 2025-12-11 14:17:16.997 189444 DEBUG nova.objects.instance [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'migration_context' on Instance uuid 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.014 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.014 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.016 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.042 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.129 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.130 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.131 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.144 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.232 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.233 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.280 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.eph0 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.281 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.281 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.358 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.359 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.359 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Ensure instance console log exists: /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.360 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.360 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.361 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.363 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:16:58Z,direct_url=<?>,disk_format='qcow2',id=387e0fc7-8558-4207-962d-1375e3941d5e,min_disk=0,min_ram=0,name='fvt_testing_image',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:17:03Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '387e0fc7-8558-4207-962d-1375e3941d5e'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'size': 1, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.371 189444 WARNING nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.378 189444 DEBUG nova.virt.libvirt.host [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.379 189444 DEBUG nova.virt.libvirt.host [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.383 189444 DEBUG nova.virt.libvirt.host [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.384 189444 DEBUG nova.virt.libvirt.host [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.384 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.384 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:17:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='3c46d552-85fd-4d6a-8605-32df7579bbee',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-11T14:16:58Z,direct_url=<?>,disk_format='qcow2',id=387e0fc7-8558-4207-962d-1375e3941d5e,min_disk=0,min_ram=0,name='fvt_testing_image',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-11T14:17:03Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.385 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.385 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.385 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.385 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.385 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.386 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.386 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.386 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.386 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.386 189444 DEBUG nova.virt.hardware [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.392 189444 DEBUG nova.objects.instance [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'pci_devices' on Instance uuid 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.407 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <uuid>03287a0e-c7ac-454e-a7e7-81f9ba3f11bf</uuid>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <name>instance-00000005</name>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <memory>524288</memory>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <nova:name>fvt_testing_server</nova:name>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:17:17</nova:creationTime>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <nova:flavor name="fvt_testing_flavor">
Dec 11 14:17:17 compute-0 nova_compute[189440]:        <nova:memory>512</nova:memory>
Dec 11 14:17:17 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:17:17 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:17:17 compute-0 nova_compute[189440]:        <nova:ephemeral>1</nova:ephemeral>
Dec 11 14:17:17 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:17:17 compute-0 nova_compute[189440]:        <nova:user uuid="26c7a9a5c1c0404bb144cd3cba8ecf9f">admin</nova:user>
Dec 11 14:17:17 compute-0 nova_compute[189440]:        <nova:project uuid="9c30b62d3d094e1e8b410a2af9fd7d98">admin</nova:project>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="387e0fc7-8558-4207-962d-1375e3941d5e"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <nova:ports/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <system>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <entry name="serial">03287a0e-c7ac-454e-a7e7-81f9ba3f11bf</entry>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <entry name="uuid">03287a0e-c7ac-454e-a7e7-81f9ba3f11bf</entry>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    </system>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <os>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  </os>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <features>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  </features>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.eph0"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <target dev="vdb" bus="virtio"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.config"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/console.log" append="off"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <video>
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    </video>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:17:17 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:17:17 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:17:17 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:17:17 compute-0 nova_compute[189440]: </domain>
Dec 11 14:17:17 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.455 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.455 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.455 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.456 189444 INFO nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Using config drive
Dec 11 14:17:17 compute-0 podman[246669]: 2025-12-11 14:17:17.487302198 +0000 UTC m=+0.081023775 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.595 189444 INFO nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Creating config drive at /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.config#033[00m
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.602 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_mjywk3w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:17:17 compute-0 nova_compute[189440]: 2025-12-11 14:17:17.745 189444 DEBUG oslo_concurrency.processutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_mjywk3w" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:17:17 compute-0 systemd-machined[155778]: New machine qemu-5-instance-00000005.
Dec 11 14:17:17 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.415 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765462638.414948, 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.417 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] VM Resumed (Lifecycle Event)#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.419 189444 DEBUG nova.compute.manager [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.420 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.424 189444 INFO nova.virt.libvirt.driver [-] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Instance spawned successfully.#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.424 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.439 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.449 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.455 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.456 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.456 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.456 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.457 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.457 189444 DEBUG nova.virt.libvirt.driver [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.467 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.467 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765462638.4169881, 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.467 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] VM Started (Lifecycle Event)#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.486 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.491 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.520 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.544 189444 INFO nova.compute.manager [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Took 3.57 seconds to spawn the instance on the hypervisor.#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.544 189444 DEBUG nova.compute.manager [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.596 189444 INFO nova.compute.manager [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Took 5.88 seconds to build instance.#033[00m
Dec 11 14:17:18 compute-0 nova_compute[189440]: 2025-12-11 14:17:18.616 189444 DEBUG oslo_concurrency.lockutils [None req-a762b494-2117-4e66-8880-1ab5f89bab4f 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "03287a0e-c7ac-454e-a7e7-81f9ba3f11bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:19 compute-0 podman[246718]: 2025-12-11 14:17:19.507602824 +0000 UTC m=+0.094669597 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=)
Dec 11 14:17:19 compute-0 podman[246717]: 2025-12-11 14:17:19.528558654 +0000 UTC m=+0.117103923 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 14:17:19 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 11 14:17:19 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 11 14:17:20 compute-0 nova_compute[189440]: 2025-12-11 14:17:20.708 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:21 compute-0 nova_compute[189440]: 2025-12-11 14:17:21.435 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:21 compute-0 podman[246771]: 2025-12-11 14:17:21.556472957 +0000 UTC m=+0.139771847 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 11 14:17:25 compute-0 podman[246790]: 2025-12-11 14:17:25.596875524 +0000 UTC m=+0.185262045 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:17:25 compute-0 nova_compute[189440]: 2025-12-11 14:17:25.710 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:26 compute-0 nova_compute[189440]: 2025-12-11 14:17:26.439 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:29 compute-0 podman[246814]: 2025-12-11 14:17:29.533949454 +0000 UTC m=+0.127353645 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 11 14:17:29 compute-0 podman[203650]: time="2025-12-11T14:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:17:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:17:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec 11 14:17:30 compute-0 nova_compute[189440]: 2025-12-11 14:17:30.713 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:31 compute-0 openstack_network_exporter[205834]: ERROR   14:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:17:31 compute-0 openstack_network_exporter[205834]: ERROR   14:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:17:31 compute-0 openstack_network_exporter[205834]: ERROR   14:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:17:31 compute-0 openstack_network_exporter[205834]: ERROR   14:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:17:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:17:31 compute-0 openstack_network_exporter[205834]: ERROR   14:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:17:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:17:31 compute-0 nova_compute[189440]: 2025-12-11 14:17:31.443 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:31 compute-0 podman[246835]: 2025-12-11 14:17:31.506897317 +0000 UTC m=+0.096692038 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:17:35 compute-0 nova_compute[189440]: 2025-12-11 14:17:35.717 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:36 compute-0 nova_compute[189440]: 2025-12-11 14:17:36.445 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:36 compute-0 nova_compute[189440]: 2025-12-11 14:17:36.995 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "03287a0e-c7ac-454e-a7e7-81f9ba3f11bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:36 compute-0 nova_compute[189440]: 2025-12-11 14:17:36.997 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "03287a0e-c7ac-454e-a7e7-81f9ba3f11bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:36 compute-0 nova_compute[189440]: 2025-12-11 14:17:36.998 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "03287a0e-c7ac-454e-a7e7-81f9ba3f11bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:37 compute-0 nova_compute[189440]: 2025-12-11 14:17:36.999 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "03287a0e-c7ac-454e-a7e7-81f9ba3f11bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:37 compute-0 nova_compute[189440]: 2025-12-11 14:17:37.000 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "03287a0e-c7ac-454e-a7e7-81f9ba3f11bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:37 compute-0 nova_compute[189440]: 2025-12-11 14:17:37.003 189444 INFO nova.compute.manager [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Terminating instance#033[00m
Dec 11 14:17:37 compute-0 nova_compute[189440]: 2025-12-11 14:17:37.005 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "refresh_cache-03287a0e-c7ac-454e-a7e7-81f9ba3f11bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:17:37 compute-0 nova_compute[189440]: 2025-12-11 14:17:37.006 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquired lock "refresh_cache-03287a0e-c7ac-454e-a7e7-81f9ba3f11bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:17:37 compute-0 nova_compute[189440]: 2025-12-11 14:17:37.007 189444 DEBUG nova.network.neutron [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:17:37 compute-0 nova_compute[189440]: 2025-12-11 14:17:37.480 189444 DEBUG nova.network.neutron [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:17:37 compute-0 nova_compute[189440]: 2025-12-11 14:17:37.971 189444 DEBUG nova.network.neutron [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:37.999 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Releasing lock "refresh_cache-03287a0e-c7ac-454e-a7e7-81f9ba3f11bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:38.001 189444 DEBUG nova.compute.manager [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 11 14:17:38 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec 11 14:17:38 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 20.680s CPU time.
Dec 11 14:17:38 compute-0 systemd-machined[155778]: Machine qemu-5-instance-00000005 terminated.
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:38.286 189444 INFO nova.virt.libvirt.driver [-] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Instance destroyed successfully.#033[00m
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:38.287 189444 DEBUG nova.objects.instance [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'resources' on Instance uuid 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:38.307 189444 INFO nova.virt.libvirt.driver [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Deleting instance files /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf_del#033[00m
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:38.308 189444 INFO nova.virt.libvirt.driver [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Deletion of /var/lib/nova/instances/03287a0e-c7ac-454e-a7e7-81f9ba3f11bf_del complete#033[00m
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:38.378 189444 INFO nova.compute.manager [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:38.379 189444 DEBUG oslo.service.loopingcall [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:38.379 189444 DEBUG nova.compute.manager [-] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 11 14:17:38 compute-0 nova_compute[189440]: 2025-12-11 14:17:38.380 189444 DEBUG nova.network.neutron [-] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 11 14:17:39 compute-0 nova_compute[189440]: 2025-12-11 14:17:39.482 189444 DEBUG nova.network.neutron [-] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:17:39 compute-0 nova_compute[189440]: 2025-12-11 14:17:39.513 189444 DEBUG nova.network.neutron [-] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:17:39 compute-0 nova_compute[189440]: 2025-12-11 14:17:39.544 189444 INFO nova.compute.manager [-] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Took 1.16 seconds to deallocate network for instance.#033[00m
Dec 11 14:17:39 compute-0 nova_compute[189440]: 2025-12-11 14:17:39.614 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:17:39 compute-0 nova_compute[189440]: 2025-12-11 14:17:39.615 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:17:39 compute-0 nova_compute[189440]: 2025-12-11 14:17:39.755 189444 DEBUG nova.compute.provider_tree [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:17:39 compute-0 nova_compute[189440]: 2025-12-11 14:17:39.802 189444 DEBUG nova.scheduler.client.report [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:17:39 compute-0 nova_compute[189440]: 2025-12-11 14:17:39.892 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:39 compute-0 nova_compute[189440]: 2025-12-11 14:17:39.965 189444 INFO nova.scheduler.client.report [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Deleted allocations for instance 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf#033[00m
Dec 11 14:17:40 compute-0 nova_compute[189440]: 2025-12-11 14:17:40.025 189444 DEBUG oslo_concurrency.lockutils [None req-cac58338-cba6-4f7d-8328-9d772d4be877 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "03287a0e-c7ac-454e-a7e7-81f9ba3f11bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:17:40 compute-0 nova_compute[189440]: 2025-12-11 14:17:40.720 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:41 compute-0 nova_compute[189440]: 2025-12-11 14:17:41.448 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.986 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.987 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9cec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:17:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:42.996 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '125c0574-9fcf-4ecf-9bd8-c4008826d3b3', 'name': 'vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.001 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '82437023-b24d-48bf-af1c-d1957df4da67', 'name': 'test_0', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.001 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.002 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:17:43.002717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.009 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.015 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.017 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:17:43.018133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.044 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/cpu volume: 36280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.071 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/cpu volume: 47600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.074 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:17:43.074047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.099 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.100 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.101 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.132 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.132 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.133 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.134 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.135 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.135 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.135 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:17:43.135594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.136 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.136 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.137 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.138 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.138 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.138 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.139 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:17:43.138932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.139 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.140 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.141 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.142 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.142 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:17:43.141743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.144 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.144 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.144 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.144 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.145 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.145 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:17:43.145432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.145 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.146 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.148 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.148 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.149 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:17:43.148677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.150 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.151 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.152 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:17:43.152130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.152 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.153 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.154 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.154 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:17:43.155146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.155 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.156 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.156 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.157 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.157 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.158 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.159 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.159 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.160 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.160 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:17:43.160227) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.268 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 406025219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.268 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 74406979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.269 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 55584693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.401 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 414087761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.402 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 86850533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.402 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 54519228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.403 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.403 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.404 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.404 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.404 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:17:43.403930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.405 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.405 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.405 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.406 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.406 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.406 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.407 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.407 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.408 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.408 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.408 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.409 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.409 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.409 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 41791488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.410 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.410 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.410 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.411 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.411 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.412 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.412 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:17:43.406765) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.413 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 1481953607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:17:43.409729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.413 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 9758476 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.413 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.413 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 1535528083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.414 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 13914030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.414 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.415 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.415 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.415 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.415 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:17:43.412933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.415 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.416 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.416 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.417 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.417 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:17:43.415744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.417 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.417 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.417 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.418 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.418 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:17:43.417344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.419 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.419 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.420 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.420 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.420 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.421 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.421 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.421 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:17:43.420954) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.421 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.422 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.422 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.422 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.422 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.423 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.423 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.423 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.424 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:17:43.423448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.425 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.425 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:17:43.425352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.426 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.427 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.427 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:17:43.427486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.428 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.429 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.429 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.429 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.430 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.431 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:17:43.429238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.431 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:17:43.431610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.432 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.432 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.433 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.433 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.433 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.433 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.433 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.434 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.434 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.434 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.434 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.435 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.436 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.436 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.436 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.436 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.436 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.436 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.436 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:17:43.433555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:17:43.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:17:43 compute-0 podman[246873]: 2025-12-11 14:17:43.487902184 +0000 UTC m=+0.076295070 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 14:17:45 compute-0 podman[246897]: 2025-12-11 14:17:45.509218936 +0000 UTC m=+0.101842973 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:17:45 compute-0 nova_compute[189440]: 2025-12-11 14:17:45.722 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:46 compute-0 nova_compute[189440]: 2025-12-11 14:17:46.452 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:48 compute-0 podman[246919]: 2025-12-11 14:17:48.527619253 +0000 UTC m=+0.115180067 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:17:50 compute-0 podman[246940]: 2025-12-11 14:17:50.482089018 +0000 UTC m=+0.064016131 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 11 14:17:50 compute-0 podman[246941]: 2025-12-11 14:17:50.513219416 +0000 UTC m=+0.101144215 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git)
Dec 11 14:17:50 compute-0 nova_compute[189440]: 2025-12-11 14:17:50.724 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:51 compute-0 nova_compute[189440]: 2025-12-11 14:17:51.456 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:52 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec 11 14:17:52 compute-0 systemd[1]: session-29.scope: Consumed 1.223s CPU time.
Dec 11 14:17:52 compute-0 systemd-logind[786]: Session 29 logged out. Waiting for processes to exit.
Dec 11 14:17:52 compute-0 systemd-logind[786]: Removed session 29.
Dec 11 14:17:52 compute-0 podman[246977]: 2025-12-11 14:17:52.503628735 +0000 UTC m=+0.093714715 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 11 14:17:53 compute-0 nova_compute[189440]: 2025-12-11 14:17:53.283 189444 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765462658.2829337, 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:17:53 compute-0 nova_compute[189440]: 2025-12-11 14:17:53.284 189444 INFO nova.compute.manager [-] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] VM Stopped (Lifecycle Event)#033[00m
Dec 11 14:17:53 compute-0 nova_compute[189440]: 2025-12-11 14:17:53.309 189444 DEBUG nova.compute.manager [None req-85123929-9215-4beb-8048-e47f15604628 - - - - - -] [instance: 03287a0e-c7ac-454e-a7e7-81f9ba3f11bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:17:55 compute-0 nova_compute[189440]: 2025-12-11 14:17:55.726 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:56 compute-0 nova_compute[189440]: 2025-12-11 14:17:56.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:17:56 compute-0 nova_compute[189440]: 2025-12-11 14:17:56.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:17:56 compute-0 nova_compute[189440]: 2025-12-11 14:17:56.460 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:17:56 compute-0 podman[246997]: 2025-12-11 14:17:56.581361373 +0000 UTC m=+0.163924245 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Dec 11 14:17:58 compute-0 nova_compute[189440]: 2025-12-11 14:17:58.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:17:59 compute-0 nova_compute[189440]: 2025-12-11 14:17:59.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:17:59 compute-0 podman[203650]: time="2025-12-11T14:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:17:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:17:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec 11 14:18:00 compute-0 podman[247022]: 2025-12-11 14:18:00.47273564 +0000 UTC m=+0.076380482 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350)
Dec 11 14:18:00 compute-0 nova_compute[189440]: 2025-12-11 14:18:00.730 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:01 compute-0 nova_compute[189440]: 2025-12-11 14:18:01.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:01 compute-0 nova_compute[189440]: 2025-12-11 14:18:01.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:18:01 compute-0 openstack_network_exporter[205834]: ERROR   14:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:18:01 compute-0 openstack_network_exporter[205834]: ERROR   14:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:18:01 compute-0 openstack_network_exporter[205834]: ERROR   14:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:18:01 compute-0 openstack_network_exporter[205834]: ERROR   14:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:18:01 compute-0 openstack_network_exporter[205834]: ERROR   14:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:18:01 compute-0 nova_compute[189440]: 2025-12-11 14:18:01.463 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:01 compute-0 nova_compute[189440]: 2025-12-11 14:18:01.871 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:18:01 compute-0 nova_compute[189440]: 2025-12-11 14:18:01.872 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:18:01 compute-0 nova_compute[189440]: 2025-12-11 14:18:01.872 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:18:02 compute-0 podman[247042]: 2025-12-11 14:18:02.523149692 +0000 UTC m=+0.109411037 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:18:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:18:04.096 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:18:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:18:04.097 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:18:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:18:04.098 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:18:04 compute-0 nova_compute[189440]: 2025-12-11 14:18:04.699 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updating instance_info_cache with network_info: [{"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:18:04 compute-0 nova_compute[189440]: 2025-12-11 14:18:04.717 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:18:04 compute-0 nova_compute[189440]: 2025-12-11 14:18:04.718 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:18:04 compute-0 nova_compute[189440]: 2025-12-11 14:18:04.720 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:04 compute-0 nova_compute[189440]: 2025-12-11 14:18:04.721 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:04 compute-0 nova_compute[189440]: 2025-12-11 14:18:04.722 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 11 14:18:04 compute-0 nova_compute[189440]: 2025-12-11 14:18:04.739 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 11 14:18:05 compute-0 nova_compute[189440]: 2025-12-11 14:18:05.732 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.254 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.255 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.289 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.290 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.291 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.292 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.394 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.457 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.458 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.473 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.517 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.518 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.579 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.580 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.675 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.686 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.752 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.753 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.831 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.832 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.906 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.907 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:18:06 compute-0 nova_compute[189440]: 2025-12-11 14:18:06.982 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.528 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.530 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=72.32468795776367GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.530 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.531 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.631 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.632 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.632 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.633 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.656 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing inventories for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.676 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating ProviderTree inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.677 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.698 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing aggregate associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.737 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing trait associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.831 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.851 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.878 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:18:07 compute-0 nova_compute[189440]: 2025-12-11 14:18:07.878 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:18:08 compute-0 nova_compute[189440]: 2025-12-11 14:18:08.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:08 compute-0 nova_compute[189440]: 2025-12-11 14:18:08.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:08 compute-0 nova_compute[189440]: 2025-12-11 14:18:08.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:10 compute-0 nova_compute[189440]: 2025-12-11 14:18:10.735 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:11 compute-0 nova_compute[189440]: 2025-12-11 14:18:11.476 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:14 compute-0 podman[247090]: 2025-12-11 14:18:14.486065748 +0000 UTC m=+0.074013714 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:18:14 compute-0 systemd-logind[786]: New session 30 of user zuul.
Dec 11 14:18:14 compute-0 systemd[1]: Started Session 30 of User zuul.
Dec 11 14:18:15 compute-0 python3[247292]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 14:18:15 compute-0 nova_compute[189440]: 2025-12-11 14:18:15.739 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:16 compute-0 nova_compute[189440]: 2025-12-11 14:18:16.479 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:16 compute-0 podman[247331]: 2025-12-11 14:18:16.503081898 +0000 UTC m=+0.098826091 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 11 14:18:18 compute-0 nova_compute[189440]: 2025-12-11 14:18:18.250 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:18 compute-0 nova_compute[189440]: 2025-12-11 14:18:18.250 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 11 14:18:19 compute-0 podman[247350]: 2025-12-11 14:18:19.513434533 +0000 UTC m=+0.104214015 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202)
Dec 11 14:18:20 compute-0 nova_compute[189440]: 2025-12-11 14:18:20.742 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:21 compute-0 nova_compute[189440]: 2025-12-11 14:18:21.482 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:21 compute-0 podman[247370]: 2025-12-11 14:18:21.559143709 +0000 UTC m=+0.136162992 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:18:21 compute-0 podman[247371]: 2025-12-11 14:18:21.5628295 +0000 UTC m=+0.141246108 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 11 14:18:23 compute-0 podman[247554]: 2025-12-11 14:18:23.32433299 +0000 UTC m=+0.112471691 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 11 14:18:23 compute-0 python3[247600]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 14:18:25 compute-0 nova_compute[189440]: 2025-12-11 14:18:25.747 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:26 compute-0 nova_compute[189440]: 2025-12-11 14:18:26.484 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:27 compute-0 podman[247639]: 2025-12-11 14:18:27.579468446 +0000 UTC m=+0.158157539 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 14:18:29 compute-0 podman[203650]: time="2025-12-11T14:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:18:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:18:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec 11 14:18:30 compute-0 nova_compute[189440]: 2025-12-11 14:18:30.751 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:31 compute-0 openstack_network_exporter[205834]: ERROR   14:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:18:31 compute-0 openstack_network_exporter[205834]: ERROR   14:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:18:31 compute-0 openstack_network_exporter[205834]: ERROR   14:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:18:31 compute-0 openstack_network_exporter[205834]: ERROR   14:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:18:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:18:31 compute-0 openstack_network_exporter[205834]: ERROR   14:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:18:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:18:31 compute-0 nova_compute[189440]: 2025-12-11 14:18:31.486 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:31 compute-0 podman[247666]: 2025-12-11 14:18:31.496940055 +0000 UTC m=+0.089302475 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec 11 14:18:33 compute-0 podman[247835]: 2025-12-11 14:18:33.076658388 +0000 UTC m=+0.083838248 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:18:33 compute-0 python3[247885]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 14:18:35 compute-0 nova_compute[189440]: 2025-12-11 14:18:35.756 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:36 compute-0 nova_compute[189440]: 2025-12-11 14:18:36.489 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:40 compute-0 nova_compute[189440]: 2025-12-11 14:18:40.757 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:41 compute-0 nova_compute[189440]: 2025-12-11 14:18:41.493 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:44 compute-0 podman[247923]: 2025-12-11 14:18:44.818284224 +0000 UTC m=+0.102967385 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:18:45 compute-0 nova_compute[189440]: 2025-12-11 14:18:45.759 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:46 compute-0 nova_compute[189440]: 2025-12-11 14:18:46.495 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:47 compute-0 podman[247946]: 2025-12-11 14:18:47.507508513 +0000 UTC m=+0.097886428 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 11 14:18:48 compute-0 python3[248138]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 14:18:50 compute-0 podman[248179]: 2025-12-11 14:18:50.509383976 +0000 UTC m=+0.090476943 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Dec 11 14:18:50 compute-0 nova_compute[189440]: 2025-12-11 14:18:50.762 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:51 compute-0 nova_compute[189440]: 2025-12-11 14:18:51.499 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:52 compute-0 podman[248198]: 2025-12-11 14:18:52.559147952 +0000 UTC m=+0.159489372 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:18:52 compute-0 podman[248199]: 2025-12-11 14:18:52.568360292 +0000 UTC m=+0.167332178 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, build-date=2024-09-18T21:23:30, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 11 14:18:53 compute-0 podman[248235]: 2025-12-11 14:18:53.544310992 +0000 UTC m=+0.146030197 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0)
Dec 11 14:18:55 compute-0 nova_compute[189440]: 2025-12-11 14:18:55.766 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:56 compute-0 nova_compute[189440]: 2025-12-11 14:18:56.502 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:18:58 compute-0 nova_compute[189440]: 2025-12-11 14:18:58.250 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:58 compute-0 nova_compute[189440]: 2025-12-11 14:18:58.250 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:58 compute-0 nova_compute[189440]: 2025-12-11 14:18:58.250 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:18:58 compute-0 podman[248255]: 2025-12-11 14:18:58.619145819 +0000 UTC m=+0.201499548 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 14:18:59 compute-0 nova_compute[189440]: 2025-12-11 14:18:59.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:18:59 compute-0 rsyslogd[236802]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 14:18:59 compute-0 rsyslogd[236802]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 14:18:59 compute-0 podman[203650]: time="2025-12-11T14:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:18:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:18:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec 11 14:19:00 compute-0 nova_compute[189440]: 2025-12-11 14:19:00.770 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:01 compute-0 openstack_network_exporter[205834]: ERROR   14:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:19:01 compute-0 openstack_network_exporter[205834]: ERROR   14:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:19:01 compute-0 openstack_network_exporter[205834]: ERROR   14:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:19:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:19:01 compute-0 openstack_network_exporter[205834]: ERROR   14:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:19:01 compute-0 openstack_network_exporter[205834]: ERROR   14:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:19:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:19:01 compute-0 nova_compute[189440]: 2025-12-11 14:19:01.504 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:02 compute-0 nova_compute[189440]: 2025-12-11 14:19:02.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:19:02 compute-0 nova_compute[189440]: 2025-12-11 14:19:02.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:19:02 compute-0 nova_compute[189440]: 2025-12-11 14:19:02.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:19:02 compute-0 podman[248283]: 2025-12-11 14:19:02.49329023 +0000 UTC m=+0.093137229 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter)
Dec 11 14:19:02 compute-0 nova_compute[189440]: 2025-12-11 14:19:02.859 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:19:02 compute-0 nova_compute[189440]: 2025-12-11 14:19:02.860 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:19:02 compute-0 nova_compute[189440]: 2025-12-11 14:19:02.861 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:19:02 compute-0 nova_compute[189440]: 2025-12-11 14:19:02.862 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:19:03 compute-0 podman[248304]: 2025-12-11 14:19:03.540380082 +0000 UTC m=+0.120766758 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 14:19:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:19:04.097 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:19:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:19:04.098 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:19:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:19:04.099 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:19:04 compute-0 nova_compute[189440]: 2025-12-11 14:19:04.913 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:19:04 compute-0 nova_compute[189440]: 2025-12-11 14:19:04.929 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:19:04 compute-0 nova_compute[189440]: 2025-12-11 14:19:04.930 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:19:04 compute-0 nova_compute[189440]: 2025-12-11 14:19:04.932 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:19:05 compute-0 nova_compute[189440]: 2025-12-11 14:19:05.772 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:06 compute-0 nova_compute[189440]: 2025-12-11 14:19:06.508 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.270 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.271 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.272 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.272 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.383 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.473 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.475 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.540 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.542 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.635 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.636 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.697 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.709 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.787 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.789 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.855 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.858 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.925 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:19:07 compute-0 nova_compute[189440]: 2025-12-11 14:19:07.928 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.023 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.486 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.487 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4942MB free_disk=72.32468795776367GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.488 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.488 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.619 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.620 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.620 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.620 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.803 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.821 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.822 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:19:08 compute-0 nova_compute[189440]: 2025-12-11 14:19:08.823 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:19:09 compute-0 nova_compute[189440]: 2025-12-11 14:19:09.817 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:19:09 compute-0 nova_compute[189440]: 2025-12-11 14:19:09.839 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:19:10 compute-0 nova_compute[189440]: 2025-12-11 14:19:10.776 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:11 compute-0 nova_compute[189440]: 2025-12-11 14:19:11.511 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:15 compute-0 podman[248354]: 2025-12-11 14:19:15.517388728 +0000 UTC m=+0.095903698 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:19:15 compute-0 nova_compute[189440]: 2025-12-11 14:19:15.779 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:16 compute-0 nova_compute[189440]: 2025-12-11 14:19:16.514 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:18 compute-0 podman[248377]: 2025-12-11 14:19:18.526427359 +0000 UTC m=+0.121470615 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 11 14:19:19 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 11 14:19:20 compute-0 nova_compute[189440]: 2025-12-11 14:19:20.781 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:21 compute-0 nova_compute[189440]: 2025-12-11 14:19:21.517 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:21 compute-0 podman[248396]: 2025-12-11 14:19:21.522985368 +0000 UTC m=+0.110036370 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:19:23 compute-0 podman[248414]: 2025-12-11 14:19:23.502691072 +0000 UTC m=+0.099424447 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 11 14:19:23 compute-0 podman[248415]: 2025-12-11 14:19:23.509687116 +0000 UTC m=+0.104140754 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, vcs-type=git, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, 
distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 11 14:19:24 compute-0 podman[248452]: 2025-12-11 14:19:24.566328406 +0000 UTC m=+0.144079479 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 14:19:25 compute-0 nova_compute[189440]: 2025-12-11 14:19:25.784 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:26 compute-0 nova_compute[189440]: 2025-12-11 14:19:26.520 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:29 compute-0 podman[248471]: 2025-12-11 14:19:29.526394066 +0000 UTC m=+0.125414354 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:19:29 compute-0 podman[203650]: time="2025-12-11T14:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:19:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:19:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec 11 14:19:30 compute-0 nova_compute[189440]: 2025-12-11 14:19:30.788 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:31 compute-0 openstack_network_exporter[205834]: ERROR   14:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:19:31 compute-0 openstack_network_exporter[205834]: ERROR   14:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:19:31 compute-0 openstack_network_exporter[205834]: ERROR   14:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:19:31 compute-0 openstack_network_exporter[205834]: ERROR   14:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:19:31 compute-0 openstack_network_exporter[205834]: ERROR   14:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:19:31 compute-0 nova_compute[189440]: 2025-12-11 14:19:31.523 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:33 compute-0 podman[248498]: 2025-12-11 14:19:33.525462068 +0000 UTC m=+0.108419471 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Dec 11 14:19:34 compute-0 podman[248518]: 2025-12-11 14:19:34.47258444 +0000 UTC m=+0.068010344 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:19:35 compute-0 nova_compute[189440]: 2025-12-11 14:19:35.791 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:36 compute-0 nova_compute[189440]: 2025-12-11 14:19:36.526 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:40 compute-0 nova_compute[189440]: 2025-12-11 14:19:40.793 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:41 compute-0 nova_compute[189440]: 2025-12-11 14:19:41.528 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.987 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.987 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9d0a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.004 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '125c0574-9fcf-4ecf-9bd8-c4008826d3b3', 'name': 'vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {'metering.server_group': 'f7b42205-1b4f-49eb-9f02-9c04957c72b4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.010 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '82437023-b24d-48bf-af1c-d1957df4da67', 'name': 'test_0', 'flavor': {'id': '1d6c0fe6-4c75-4860-b5c4-bc55bee577e2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '714a3758-ec97-4149-8cfb-208787ab3704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'user_id': '26c7a9a5c1c0404bb144cd3cba8ecf9f', 'hostId': '8a504434530a65f668c2ad533f19949d33f95823474d944cbd1da4c3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.011 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.012 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:19:43.011938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.020 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.024 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.025 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.025 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:19:43.025472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.052 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/cpu volume: 37950000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.074 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/cpu volume: 49320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.075 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:19:43.076222) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.101 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.101 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.101 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.124 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.124 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.124 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.125 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.125 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.125 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.125 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.126 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:19:43.125717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.126 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.126 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.127 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.127 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.127 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.127 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:19:43.126964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.128 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.128 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.128 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.129 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:19:43.128099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.129 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.130 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:19:43.129645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.130 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.131 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.131 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.131 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:19:43.131160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.132 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.132 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.133 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.133 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:19:43.132603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.134 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:19:43.134184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.134 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.135 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.135 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.135 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.135 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.136 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.136 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.136 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:19:43.136680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.217 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 406025219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.218 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 74406979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.219 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.latency volume: 55584693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.311 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 414087761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.311 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 86850533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.311 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.latency volume: 54519228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.312 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.313 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.313 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.313 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.313 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.314 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:19:43.312906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.314 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.316 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.316 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.316 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.316 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.317 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.317 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.318 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.318 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 41791488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.318 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.319 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.319 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:19:43.316046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:19:43.318454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.319 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.320 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.320 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.320 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.321 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 1481953607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.321 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 9758476 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.321 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.321 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 1535528083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.322 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 13914030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.322 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.323 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.323 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.323 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.323 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.323 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.323 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:19:43.320959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:19:43.323286) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.324 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.324 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.325 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:19:43.324895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.325 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.325 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.326 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.326 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.326 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.327 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.328 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.328 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.328 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:19:43.328051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.329 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.331 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:19:43.330236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.332 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.332 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.333 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.333 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.334 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.335 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:19:43.332579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:19:43.334301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.336 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:19:43.335484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.336 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.337 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.337 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.337 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:19:43.337329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.337 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.339 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:19:43.338836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.339 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.339 14 DEBUG ceilometer.compute.pollsters [-] 125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.339 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.340 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.340 14 DEBUG ceilometer.compute.pollsters [-] 82437023-b24d-48bf-af1c-d1957df4da67/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:19:43.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:19:45 compute-0 nova_compute[189440]: 2025-12-11 14:19:45.796 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:46 compute-0 podman[248540]: 2025-12-11 14:19:46.503932936 +0000 UTC m=+0.101251522 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:19:46 compute-0 nova_compute[189440]: 2025-12-11 14:19:46.531 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:48 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec 11 14:19:48 compute-0 systemd[1]: session-30.scope: Consumed 4.193s CPU time.
Dec 11 14:19:48 compute-0 systemd-logind[786]: Session 30 logged out. Waiting for processes to exit.
Dec 11 14:19:48 compute-0 systemd-logind[786]: Removed session 30.
Dec 11 14:19:49 compute-0 podman[248565]: 2025-12-11 14:19:49.523414456 +0000 UTC m=+0.118090052 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 14:19:50 compute-0 nova_compute[189440]: 2025-12-11 14:19:50.798 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:51 compute-0 nova_compute[189440]: 2025-12-11 14:19:51.534 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:52 compute-0 podman[248588]: 2025-12-11 14:19:52.503144707 +0000 UTC m=+0.092792201 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec 11 14:19:54 compute-0 podman[248608]: 2025-12-11 14:19:54.521254394 +0000 UTC m=+0.096356041 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, architecture=x86_64, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30)
Dec 11 14:19:54 compute-0 podman[248607]: 2025-12-11 14:19:54.525333836 +0000 UTC m=+0.105478497 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 14:19:55 compute-0 podman[248645]: 2025-12-11 14:19:55.482719173 +0000 UTC m=+0.081382847 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 14:19:55 compute-0 nova_compute[189440]: 2025-12-11 14:19:55.802 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:56 compute-0 nova_compute[189440]: 2025-12-11 14:19:56.536 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:19:59 compute-0 nova_compute[189440]: 2025-12-11 14:19:59.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:19:59 compute-0 podman[203650]: time="2025-12-11T14:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:19:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:19:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec 11 14:20:00 compute-0 nova_compute[189440]: 2025-12-11 14:20:00.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:20:00 compute-0 nova_compute[189440]: 2025-12-11 14:20:00.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:20:00 compute-0 nova_compute[189440]: 2025-12-11 14:20:00.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:20:00 compute-0 podman[248662]: 2025-12-11 14:20:00.540862442 +0000 UTC m=+0.140632623 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec 11 14:20:00 compute-0 nova_compute[189440]: 2025-12-11 14:20:00.805 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:01 compute-0 openstack_network_exporter[205834]: ERROR   14:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:20:01 compute-0 openstack_network_exporter[205834]: ERROR   14:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:20:01 compute-0 openstack_network_exporter[205834]: ERROR   14:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:20:01 compute-0 openstack_network_exporter[205834]: ERROR   14:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:20:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:20:01 compute-0 openstack_network_exporter[205834]: ERROR   14:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:20:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:20:01 compute-0 nova_compute[189440]: 2025-12-11 14:20:01.544 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:04.099 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:20:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:04.100 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:20:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:04.101 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:20:04 compute-0 nova_compute[189440]: 2025-12-11 14:20:04.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:20:04 compute-0 nova_compute[189440]: 2025-12-11 14:20:04.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:20:04 compute-0 podman[248687]: 2025-12-11 14:20:04.531585925 +0000 UTC m=+0.120818509 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec 11 14:20:04 compute-0 podman[248707]: 2025-12-11 14:20:04.607123986 +0000 UTC m=+0.076287041 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:20:04 compute-0 nova_compute[189440]: 2025-12-11 14:20:04.908 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:20:04 compute-0 nova_compute[189440]: 2025-12-11 14:20:04.908 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:20:04 compute-0 nova_compute[189440]: 2025-12-11 14:20:04.908 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:20:05 compute-0 nova_compute[189440]: 2025-12-11 14:20:05.808 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:06 compute-0 nova_compute[189440]: 2025-12-11 14:20:06.547 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:08 compute-0 nova_compute[189440]: 2025-12-11 14:20:08.922 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updating instance_info_cache with network_info: [{"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:20:08 compute-0 nova_compute[189440]: 2025-12-11 14:20:08.942 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:20:08 compute-0 nova_compute[189440]: 2025-12-11 14:20:08.943 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:20:08 compute-0 nova_compute[189440]: 2025-12-11 14:20:08.944 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:20:08 compute-0 nova_compute[189440]: 2025-12-11 14:20:08.945 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.278 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.279 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.279 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.279 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.401 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.503 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.505 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.578 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.580 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.664 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.666 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.746 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.757 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.834 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.837 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.905 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:20:09 compute-0 nova_compute[189440]: 2025-12-11 14:20:09.907 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.009 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.011 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.092 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.536 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.538 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4945MB free_disk=72.3246841430664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.539 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.539 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.630 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.630 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.631 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.631 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.718 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.734 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.736 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.736 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:20:10 compute-0 nova_compute[189440]: 2025-12-11 14:20:10.811 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:11 compute-0 nova_compute[189440]: 2025-12-11 14:20:11.549 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:12 compute-0 nova_compute[189440]: 2025-12-11 14:20:12.735 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:20:15 compute-0 nova_compute[189440]: 2025-12-11 14:20:15.815 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:16 compute-0 nova_compute[189440]: 2025-12-11 14:20:16.552 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:17 compute-0 podman[248756]: 2025-12-11 14:20:17.483424738 +0000 UTC m=+0.078324681 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:20:20 compute-0 podman[248781]: 2025-12-11 14:20:20.523823983 +0000 UTC m=+0.111325964 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 14:20:20 compute-0 nova_compute[189440]: 2025-12-11 14:20:20.818 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:21 compute-0 nova_compute[189440]: 2025-12-11 14:20:21.556 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:23 compute-0 podman[248801]: 2025-12-11 14:20:23.51824701 +0000 UTC m=+0.117108117 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 11 14:20:25 compute-0 podman[248822]: 2025-12-11 14:20:25.52719058 +0000 UTC m=+0.110561034 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, container_name=kepler, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 11 14:20:25 compute-0 podman[248821]: 2025-12-11 14:20:25.547283641 +0000 UTC m=+0.132736176 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 11 14:20:25 compute-0 podman[248858]: 2025-12-11 14:20:25.666451788 +0000 UTC m=+0.108963984 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, 
tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:20:25 compute-0 nova_compute[189440]: 2025-12-11 14:20:25.820 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:26 compute-0 nova_compute[189440]: 2025-12-11 14:20:26.560 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:29 compute-0 podman[203650]: time="2025-12-11T14:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:20:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:20:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec 11 14:20:30 compute-0 nova_compute[189440]: 2025-12-11 14:20:30.824 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:31 compute-0 openstack_network_exporter[205834]: ERROR   14:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:20:31 compute-0 openstack_network_exporter[205834]: ERROR   14:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:20:31 compute-0 openstack_network_exporter[205834]: ERROR   14:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:20:31 compute-0 openstack_network_exporter[205834]: ERROR   14:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:20:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:20:31 compute-0 openstack_network_exporter[205834]: ERROR   14:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:20:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:20:31 compute-0 nova_compute[189440]: 2025-12-11 14:20:31.562 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:31 compute-0 podman[248877]: 2025-12-11 14:20:31.580891477 +0000 UTC m=+0.155184385 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 11 14:20:35 compute-0 podman[248906]: 2025-12-11 14:20:35.509872681 +0000 UTC m=+0.098314808 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 11 14:20:35 compute-0 podman[248907]: 2025-12-11 14:20:35.529433919 +0000 UTC m=+0.116313477 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 14:20:35 compute-0 nova_compute[189440]: 2025-12-11 14:20:35.828 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:36 compute-0 nova_compute[189440]: 2025-12-11 14:20:36.564 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:40 compute-0 nova_compute[189440]: 2025-12-11 14:20:40.831 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:41 compute-0 nova_compute[189440]: 2025-12-11 14:20:41.568 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:45 compute-0 nova_compute[189440]: 2025-12-11 14:20:45.834 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:46 compute-0 nova_compute[189440]: 2025-12-11 14:20:46.571 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:48 compute-0 podman[248951]: 2025-12-11 14:20:48.530170866 +0000 UTC m=+0.119639859 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.711 189444 DEBUG oslo_concurrency.lockutils [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.712 189444 DEBUG oslo_concurrency.lockutils [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.713 189444 DEBUG oslo_concurrency.lockutils [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.713 189444 DEBUG oslo_concurrency.lockutils [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.714 189444 DEBUG oslo_concurrency.lockutils [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.718 189444 INFO nova.compute.manager [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Terminating instance#033[00m
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.721 189444 DEBUG nova.compute.manager [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.838 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:50 compute-0 kernel: tapffab0c4b-81 (unregistering): left promiscuous mode
Dec 11 14:20:50 compute-0 NetworkManager[56353]: <info>  [1765462850.9045] device (tapffab0c4b-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.912 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:50 compute-0 ovn_controller[97832]: 2025-12-11T14:20:50Z|00058|binding|INFO|Releasing lport ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 from this chassis (sb_readonly=0)
Dec 11 14:20:50 compute-0 ovn_controller[97832]: 2025-12-11T14:20:50Z|00059|binding|INFO|Setting lport ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 down in Southbound
Dec 11 14:20:50 compute-0 ovn_controller[97832]: 2025-12-11T14:20:50Z|00060|binding|INFO|Removing iface tapffab0c4b-81 ovn-installed in OVS
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.916 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:50 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:50.924 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:de:bd 192.168.0.232'], port_security=['fa:16:3e:64:de:bd 192.168.0.232'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5m7msfabwkqt-eaftnsicx5k4-rixmquahxbge-port-zv45recekdib', 'neutron:cidrs': '192.168.0.232/24', 'neutron:device_id': '125c0574-9fcf-4ecf-9bd8-c4008826d3b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5m7msfabwkqt-eaftnsicx5k4-rixmquahxbge-port-zv45recekdib', 'neutron:project_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d7aa95c-a649-4fd4-9e5a-18c0b6217450', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d8798ec-229b-449a-9c37-334c24aa485f, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=ffab0c4b-81ca-4416-acb2-bf5d1b973fc7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:20:50 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:50.926 106686 INFO neutron.agent.ovn.metadata.agent [-] Port ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 in datapath 62eb1d54-32e6-4ea5-8151-f2c97214c84d unbound from our chassis#033[00m
Dec 11 14:20:50 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:50.927 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62eb1d54-32e6-4ea5-8151-f2c97214c84d#033[00m
Dec 11 14:20:50 compute-0 nova_compute[189440]: 2025-12-11 14:20:50.928 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:50 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:50.944 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[afac15bc-cc6c-41e1-9a2e-e030e3b4cbc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:20:50 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec 11 14:20:50 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 58.531s CPU time.
Dec 11 14:20:50 compute-0 systemd-machined[155778]: Machine qemu-4-instance-00000004 terminated.
Dec 11 14:20:50 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:50.976 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[1154d8df-c28a-418c-a89b-ba25e97f65ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:20:50 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:50.980 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[ece1a2a0-9e91-4575-a40c-83eb8de52e1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:20:51 compute-0 podman[248978]: 2025-12-11 14:20:51.009876158 +0000 UTC m=+0.086281368 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:20:51 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:51.014 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[52e043d7-efb6-4fc5-b743-690eb3780a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:20:51 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:51.030 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[d48cfaa9-3f61-43e1-af3e-b5ceaf6c6507]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62eb1d54-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:cc:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378116, 'reachable_time': 43713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249007, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:20:51 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:51.046 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4d729d-1cbf-491e-b8de-2578b5e3efed]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378129, 'tstamp': 378129}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249008, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62eb1d54-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378131, 'tstamp': 378131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249008, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:20:51 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:51.048 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62eb1d54-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.050 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.056 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:51 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:51.057 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62eb1d54-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:20:51 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:51.058 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:20:51 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:51.058 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62eb1d54-30, col_values=(('external_ids', {'iface-id': 'dd9a733c-26da-4e0b-928d-1f82d21083bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:20:51 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:51.059 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.219 189444 INFO nova.virt.libvirt.driver [-] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Instance destroyed successfully.#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.219 189444 DEBUG nova.objects.instance [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'resources' on Instance uuid 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.233 189444 DEBUG nova.virt.libvirt.vif [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:10:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fabwkqt-eaftnsicx5k4-rixmquahxbge-vnf-ds3cqz5lxzrr',id=4,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-11T14:10:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='f7b42205-1b4f-49eb-9f02-9c04957c72b4'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-a9gcnjo0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-11T14:10:42Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyNDEzMzcxMTM5NDIxNjA3NjY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI0MTMzNzExMzk0MjE2MDc2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyNDEzMzcxMTM5NDIxNjA3NjY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI0MTMzNzExMzk0MjE2MDc2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyNDEzMzcxMTM5NDIxNjA3NjY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjQxMzM3MTEzOTQyMTYwNzY2PT0tLQo=',user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=125c0574-9fcf-4ecf-9bd8-c4008826d3b3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.233 189444 DEBUG nova.network.os_vif_util [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.234 189444 DEBUG nova.network.os_vif_util [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:de:bd,bridge_name='br-int',has_traffic_filtering=True,id=ffab0c4b-81ca-4416-acb2-bf5d1b973fc7,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapffab0c4b-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.235 189444 DEBUG os_vif [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:de:bd,bridge_name='br-int',has_traffic_filtering=True,id=ffab0c4b-81ca-4416-acb2-bf5d1b973fc7,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapffab0c4b-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.236 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.236 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffab0c4b-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.238 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.240 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.243 189444 INFO os_vif [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:de:bd,bridge_name='br-int',has_traffic_filtering=True,id=ffab0c4b-81ca-4416-acb2-bf5d1b973fc7,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapffab0c4b-81')#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.244 189444 INFO nova.virt.libvirt.driver [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Deleting instance files /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3_del#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.245 189444 INFO nova.virt.libvirt.driver [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Deletion of /var/lib/nova/instances/125c0574-9fcf-4ecf-9bd8-c4008826d3b3_del complete#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.330 189444 INFO nova.compute.manager [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.330 189444 DEBUG oslo.service.loopingcall [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.330 189444 DEBUG nova.compute.manager [-] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 11 14:20:51 compute-0 nova_compute[189440]: 2025-12-11 14:20:51.331 189444 DEBUG nova.network.neutron [-] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 11 14:20:51 compute-0 rsyslogd[236802]: message too long (8192) with configured size 8096, begin of message is: 2025-12-11 14:20:51.233 189444 DEBUG nova.virt.libvirt.vif [None req-54d366a5-06 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:20:54 compute-0 podman[249032]: 2025-12-11 14:20:54.513441603 +0000 UTC m=+0.105464568 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.566 189444 DEBUG nova.compute.manager [req-e774adf8-ea1b-435c-850d-b95db8bea63f req-487fee93-592c-4dc4-8046-f168f0819085 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Received event network-vif-unplugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.568 189444 DEBUG oslo_concurrency.lockutils [req-e774adf8-ea1b-435c-850d-b95db8bea63f req-487fee93-592c-4dc4-8046-f168f0819085 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.568 189444 DEBUG oslo_concurrency.lockutils [req-e774adf8-ea1b-435c-850d-b95db8bea63f req-487fee93-592c-4dc4-8046-f168f0819085 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.569 189444 DEBUG oslo_concurrency.lockutils [req-e774adf8-ea1b-435c-850d-b95db8bea63f req-487fee93-592c-4dc4-8046-f168f0819085 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.570 189444 DEBUG nova.compute.manager [req-e774adf8-ea1b-435c-850d-b95db8bea63f req-487fee93-592c-4dc4-8046-f168f0819085 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] No waiting events found dispatching network-vif-unplugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.571 189444 DEBUG nova.compute.manager [req-e774adf8-ea1b-435c-850d-b95db8bea63f req-487fee93-592c-4dc4-8046-f168f0819085 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Received event network-vif-unplugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 11 14:20:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:54.625 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:20:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:54.625 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.629 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.636 189444 DEBUG nova.compute.manager [req-8ebf5fcf-19be-4048-83b2-d6b981930deb req-6a4ae08b-fdee-4783-91aa-a10c13f495ce a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Received event network-changed-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.637 189444 DEBUG nova.compute.manager [req-8ebf5fcf-19be-4048-83b2-d6b981930deb req-6a4ae08b-fdee-4783-91aa-a10c13f495ce a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Refreshing instance network info cache due to event network-changed-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.638 189444 DEBUG oslo_concurrency.lockutils [req-8ebf5fcf-19be-4048-83b2-d6b981930deb req-6a4ae08b-fdee-4783-91aa-a10c13f495ce a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.639 189444 DEBUG oslo_concurrency.lockutils [req-8ebf5fcf-19be-4048-83b2-d6b981930deb req-6a4ae08b-fdee-4783-91aa-a10c13f495ce a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:20:54 compute-0 nova_compute[189440]: 2025-12-11 14:20:54.639 189444 DEBUG nova.network.neutron [req-8ebf5fcf-19be-4048-83b2-d6b981930deb req-6a4ae08b-fdee-4783-91aa-a10c13f495ce a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Refreshing network info cache for port ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:20:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:20:55.629 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:20:55 compute-0 nova_compute[189440]: 2025-12-11 14:20:55.842 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:56 compute-0 nova_compute[189440]: 2025-12-11 14:20:56.249 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:20:56 compute-0 podman[249053]: 2025-12-11 14:20:56.514398624 +0000 UTC m=+0.106099382 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Dec 11 14:20:56 compute-0 podman[249051]: 2025-12-11 14:20:56.531759176 +0000 UTC m=+0.112334887 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 11 14:20:56 compute-0 podman[249052]: 2025-12-11 14:20:56.533551791 +0000 UTC m=+0.114260566 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vendor=Red Hat, Inc., release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git)
Dec 11 14:20:56 compute-0 nova_compute[189440]: 2025-12-11 14:20:56.674 189444 DEBUG nova.compute.manager [req-5dddb18e-8402-47d0-af1d-4a4bc9021890 req-79f068a6-4c60-486b-aa03-382bb6235df1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Received event network-vif-plugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:20:56 compute-0 nova_compute[189440]: 2025-12-11 14:20:56.674 189444 DEBUG oslo_concurrency.lockutils [req-5dddb18e-8402-47d0-af1d-4a4bc9021890 req-79f068a6-4c60-486b-aa03-382bb6235df1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:20:56 compute-0 nova_compute[189440]: 2025-12-11 14:20:56.675 189444 DEBUG oslo_concurrency.lockutils [req-5dddb18e-8402-47d0-af1d-4a4bc9021890 req-79f068a6-4c60-486b-aa03-382bb6235df1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:20:56 compute-0 nova_compute[189440]: 2025-12-11 14:20:56.675 189444 DEBUG oslo_concurrency.lockutils [req-5dddb18e-8402-47d0-af1d-4a4bc9021890 req-79f068a6-4c60-486b-aa03-382bb6235df1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:20:56 compute-0 nova_compute[189440]: 2025-12-11 14:20:56.675 189444 DEBUG nova.compute.manager [req-5dddb18e-8402-47d0-af1d-4a4bc9021890 req-79f068a6-4c60-486b-aa03-382bb6235df1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] No waiting events found dispatching network-vif-plugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:20:56 compute-0 nova_compute[189440]: 2025-12-11 14:20:56.675 189444 WARNING nova.compute.manager [req-5dddb18e-8402-47d0-af1d-4a4bc9021890 req-79f068a6-4c60-486b-aa03-382bb6235df1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Received unexpected event network-vif-plugged-ffab0c4b-81ca-4416-acb2-bf5d1b973fc7 for instance with vm_state active and task_state deleting.#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.069 189444 DEBUG nova.network.neutron [-] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.098 189444 INFO nova.compute.manager [-] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Took 6.77 seconds to deallocate network for instance.#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.163 189444 DEBUG oslo_concurrency.lockutils [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.164 189444 DEBUG oslo_concurrency.lockutils [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.360 189444 DEBUG nova.network.neutron [req-8ebf5fcf-19be-4048-83b2-d6b981930deb req-6a4ae08b-fdee-4783-91aa-a10c13f495ce a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updated VIF entry in instance network info cache for port ffab0c4b-81ca-4416-acb2-bf5d1b973fc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.361 189444 DEBUG nova.network.neutron [req-8ebf5fcf-19be-4048-83b2-d6b981930deb req-6a4ae08b-fdee-4783-91aa-a10c13f495ce a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Updating instance_info_cache with network_info: [{"id": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "address": "fa:16:3e:64:de:bd", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffab0c4b-81", "ovs_interfaceid": "ffab0c4b-81ca-4416-acb2-bf5d1b973fc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.375 189444 DEBUG nova.compute.provider_tree [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.387 189444 DEBUG nova.scheduler.client.report [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.395 189444 DEBUG oslo_concurrency.lockutils [req-8ebf5fcf-19be-4048-83b2-d6b981930deb req-6a4ae08b-fdee-4783-91aa-a10c13f495ce a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-125c0574-9fcf-4ecf-9bd8-c4008826d3b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.440 189444 DEBUG oslo_concurrency.lockutils [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.473 189444 INFO nova.scheduler.client.report [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Deleted allocations for instance 125c0574-9fcf-4ecf-9bd8-c4008826d3b3#033[00m
Dec 11 14:20:58 compute-0 nova_compute[189440]: 2025-12-11 14:20:58.537 189444 DEBUG oslo_concurrency.lockutils [None req-54d366a5-0649-4bc5-9700-cb143e551725 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "125c0574-9fcf-4ecf-9bd8-c4008826d3b3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:20:59 compute-0 podman[203650]: time="2025-12-11T14:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:20:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:20:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec 11 14:21:00 compute-0 nova_compute[189440]: 2025-12-11 14:21:00.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:00 compute-0 nova_compute[189440]: 2025-12-11 14:21:00.845 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:01 compute-0 nova_compute[189440]: 2025-12-11 14:21:01.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:01 compute-0 nova_compute[189440]: 2025-12-11 14:21:01.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:01 compute-0 nova_compute[189440]: 2025-12-11 14:21:01.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:21:01 compute-0 nova_compute[189440]: 2025-12-11 14:21:01.253 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:01 compute-0 openstack_network_exporter[205834]: ERROR   14:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:21:01 compute-0 openstack_network_exporter[205834]: ERROR   14:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:21:01 compute-0 openstack_network_exporter[205834]: ERROR   14:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:21:01 compute-0 openstack_network_exporter[205834]: ERROR   14:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:21:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:21:01 compute-0 openstack_network_exporter[205834]: ERROR   14:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:21:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:21:02 compute-0 podman[249109]: 2025-12-11 14:21:02.522409118 +0000 UTC m=+0.117595619 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller)
Dec 11 14:21:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:04.100 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:21:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:04.101 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:21:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:04.102 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:21:04 compute-0 nova_compute[189440]: 2025-12-11 14:21:04.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:04 compute-0 nova_compute[189440]: 2025-12-11 14:21:04.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:21:04 compute-0 nova_compute[189440]: 2025-12-11 14:21:04.238 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:21:05 compute-0 nova_compute[189440]: 2025-12-11 14:21:05.286 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:21:05 compute-0 nova_compute[189440]: 2025-12-11 14:21:05.286 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:21:05 compute-0 nova_compute[189440]: 2025-12-11 14:21:05.287 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:21:05 compute-0 nova_compute[189440]: 2025-12-11 14:21:05.287 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:21:05 compute-0 nova_compute[189440]: 2025-12-11 14:21:05.848 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:06 compute-0 nova_compute[189440]: 2025-12-11 14:21:06.216 189444 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765462851.2148893, 125c0574-9fcf-4ecf-9bd8-c4008826d3b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:21:06 compute-0 nova_compute[189440]: 2025-12-11 14:21:06.216 189444 INFO nova.compute.manager [-] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] VM Stopped (Lifecycle Event)#033[00m
Dec 11 14:21:06 compute-0 nova_compute[189440]: 2025-12-11 14:21:06.235 189444 DEBUG nova.compute.manager [None req-64a0b9e6-5d6d-478f-a033-cbb545daf1b9 - - - - - -] [instance: 125c0574-9fcf-4ecf-9bd8-c4008826d3b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:21:06 compute-0 nova_compute[189440]: 2025-12-11 14:21:06.256 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:06 compute-0 podman[249135]: 2025-12-11 14:21:06.52028021 +0000 UTC m=+0.105428496 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:21:06 compute-0 podman[249134]: 2025-12-11 14:21:06.520394203 +0000 UTC m=+0.121766223 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Dec 11 14:21:09 compute-0 nova_compute[189440]: 2025-12-11 14:21:09.139 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [{"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:21:09 compute-0 nova_compute[189440]: 2025-12-11 14:21:09.276 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-82437023-b24d-48bf-af1c-d1957df4da67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:21:09 compute-0 nova_compute[189440]: 2025-12-11 14:21:09.276 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:21:09 compute-0 nova_compute[189440]: 2025-12-11 14:21:09.277 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:09 compute-0 nova_compute[189440]: 2025-12-11 14:21:09.277 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.264 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.301 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.302 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.302 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.303 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.430 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.534 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.536 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.559 189444 DEBUG oslo_concurrency.lockutils [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.561 189444 DEBUG oslo_concurrency.lockutils [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.562 189444 DEBUG oslo_concurrency.lockutils [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.563 189444 DEBUG oslo_concurrency.lockutils [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.563 189444 DEBUG oslo_concurrency.lockutils [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.567 189444 INFO nova.compute.manager [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Terminating instance#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.570 189444 DEBUG nova.compute.manager [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.601 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.603 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:21:10 compute-0 kernel: tape82f4978-3a (unregistering): left promiscuous mode
Dec 11 14:21:10 compute-0 NetworkManager[56353]: <info>  [1765462870.6347] device (tape82f4978-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.651 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:10 compute-0 ovn_controller[97832]: 2025-12-11T14:21:10Z|00061|binding|INFO|Releasing lport e82f4978-3a5a-4e23-8c30-c60478cd656f from this chassis (sb_readonly=0)
Dec 11 14:21:10 compute-0 ovn_controller[97832]: 2025-12-11T14:21:10Z|00062|binding|INFO|Setting lport e82f4978-3a5a-4e23-8c30-c60478cd656f down in Southbound
Dec 11 14:21:10 compute-0 ovn_controller[97832]: 2025-12-11T14:21:10Z|00063|binding|INFO|Removing iface tape82f4978-3a ovn-installed in OVS
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.667 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:10.675 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:ac:fb 192.168.0.20'], port_security=['fa:16:3e:4a:ac:fb 192.168.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.20/24', 'neutron:device_id': '82437023-b24d-48bf-af1c-d1957df4da67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c30b62d3d094e1e8b410a2af9fd7d98', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d7aa95c-a649-4fd4-9e5a-18c0b6217450', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d8798ec-229b-449a-9c37-334c24aa485f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=e82f4978-3a5a-4e23-8c30-c60478cd656f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:21:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:10.677 106686 INFO neutron.agent.ovn.metadata.agent [-] Port e82f4978-3a5a-4e23-8c30-c60478cd656f in datapath 62eb1d54-32e6-4ea5-8151-f2c97214c84d unbound from our chassis#033[00m
Dec 11 14:21:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:10.679 106686 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62eb1d54-32e6-4ea5-8151-f2c97214c84d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 11 14:21:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:10.681 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbea7ed-15b5-43b6-8df0-34708c41be6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:21:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:10.682 106686 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d namespace which is not needed anymore#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.691 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.710 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.712 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:21:10 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 11 14:21:10 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 17.650s CPU time.
Dec 11 14:21:10 compute-0 systemd-machined[155778]: Machine qemu-1-instance-00000001 terminated.
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.791 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.807 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.819 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.850 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.881 189444 INFO nova.virt.libvirt.driver [-] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Instance destroyed successfully.#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.882 189444 DEBUG nova.objects.instance [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lazy-loading 'resources' on Instance uuid 82437023-b24d-48bf-af1c-d1957df4da67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:21:10 compute-0 neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d[239968]: [NOTICE]   (239972) : haproxy version is 2.8.14-c23fe91
Dec 11 14:21:10 compute-0 neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d[239968]: [NOTICE]   (239972) : path to executable is /usr/sbin/haproxy
Dec 11 14:21:10 compute-0 neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d[239968]: [WARNING]  (239972) : Exiting Master process...
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.900 189444 DEBUG nova.virt.libvirt.vif [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:01:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='714a3758-ec97-4149-8cfb-208787ab3704',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-11T14:01:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c30b62d3d094e1e8b410a2af9fd7d98',ramdisk_id='',reservation_id='r-o1lpin9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='714a3758-ec97-4149-8cfb-208787ab3704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.op
enstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-11T14:01:58Z,user_data=None,user_id='26c7a9a5c1c0404bb144cd3cba8ecf9f',uuid=82437023-b24d-48bf-af1c-d1957df4da67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 11 14:21:10 compute-0 neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d[239968]: [ALERT]    (239972) : Current worker (239974) exited with code 143 (Terminated)
Dec 11 14:21:10 compute-0 neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d[239968]: [WARNING]  (239972) : All workers exited. Exiting... (0)
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.901 189444 DEBUG nova.network.os_vif_util [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converting VIF {"id": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "address": "fa:16:3e:4a:ac:fb", "network": {"id": "62eb1d54-32e6-4ea5-8151-f2c97214c84d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c30b62d3d094e1e8b410a2af9fd7d98", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape82f4978-3a", "ovs_interfaceid": "e82f4978-3a5a-4e23-8c30-c60478cd656f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.902 189444 DEBUG nova.network.os_vif_util [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:ac:fb,bridge_name='br-int',has_traffic_filtering=True,id=e82f4978-3a5a-4e23-8c30-c60478cd656f,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape82f4978-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.902 189444 DEBUG os_vif [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:ac:fb,bridge_name='br-int',has_traffic_filtering=True,id=e82f4978-3a5a-4e23-8c30-c60478cd656f,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape82f4978-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.904 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.905 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape82f4978-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:21:10 compute-0 systemd[1]: libpod-c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda.scope: Deactivated successfully.
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.906 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.909 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:10 compute-0 podman[249222]: 2025-12-11 14:21:10.910397848 +0000 UTC m=+0.073696996 container died c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.914 189444 INFO os_vif [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:ac:fb,bridge_name='br-int',has_traffic_filtering=True,id=e82f4978-3a5a-4e23-8c30-c60478cd656f,network=Network(62eb1d54-32e6-4ea5-8151-f2c97214c84d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape82f4978-3a')#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.915 189444 INFO nova.virt.libvirt.driver [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Deleting instance files /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67_del#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.917 189444 INFO nova.virt.libvirt.driver [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Deletion of /var/lib/nova/instances/82437023-b24d-48bf-af1c-d1957df4da67_del complete#033[00m
Dec 11 14:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda-userdata-shm.mount: Deactivated successfully.
Dec 11 14:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b1a1cb75262f298eeb24e9112e6eb20d6013c5279ecc8fc3521423bd6fa0484-merged.mount: Deactivated successfully.
Dec 11 14:21:10 compute-0 podman[249222]: 2025-12-11 14:21:10.96105382 +0000 UTC m=+0.124352958 container cleanup c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.968 189444 INFO nova.compute.manager [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.968 189444 DEBUG oslo.service.loopingcall [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.968 189444 DEBUG nova.compute.manager [-] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 11 14:21:10 compute-0 nova_compute[189440]: 2025-12-11 14:21:10.969 189444 DEBUG nova.network.neutron [-] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 11 14:21:10 compute-0 systemd[1]: libpod-conmon-c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda.scope: Deactivated successfully.
Dec 11 14:21:11 compute-0 podman[249263]: 2025-12-11 14:21:11.078915444 +0000 UTC m=+0.077619874 container remove c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:21:11 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:11.098 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[9f22a29d-d35f-4cea-9940-7a281d96cee2]: (4, ('Thu Dec 11 02:21:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d (c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda)\nc272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda\nThu Dec 11 02:21:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d (c272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda)\nc272ebad9636bcebeabf0b226ad31ee23dff657343892f92b3c0f63f9b056dda\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:21:11 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:11.099 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[6685df7d-77cd-4f1a-8641-c2d9b1cf06fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:21:11 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:11.100 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62eb1d54-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.102 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:11 compute-0 kernel: tap62eb1d54-30: left promiscuous mode
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.119 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:11 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:11.122 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[154363b5-cdd2-45d4-9742-2e946a89a39c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:21:11 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:11.139 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[a06c17c6-7aef-47aa-9a09-da135b67f767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:21:11 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:11.141 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[45c057b6-44e7-48dc-96a7-4fe7ce8c0465]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:21:11 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:11.159 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[52141445-1eea-40e8-817e-1cc0fb84cdd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378105, 'reachable_time': 33094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249277, 'error': None, 'target': 'ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:21:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d62eb1d54\x2d32e6\x2d4ea5\x2d8151\x2df2c97214c84d.mount: Deactivated successfully.
Dec 11 14:21:11 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:11.179 106799 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62eb1d54-32e6-4ea5-8151-f2c97214c84d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 11 14:21:11 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:21:11.180 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[c276d771-1122-4323-a76b-a88b866db4fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.289 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.290 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=72.34666061401367GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.291 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.291 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.383 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 82437023-b24d-48bf-af1c-d1957df4da67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.384 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.385 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.442 189444 DEBUG nova.compute.manager [req-5bca979f-0a71-4eeb-bb55-df43a4906d07 req-c4752010-6113-4de4-aef4-efc80b4275e3 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received event network-vif-unplugged-e82f4978-3a5a-4e23-8c30-c60478cd656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.443 189444 DEBUG oslo_concurrency.lockutils [req-5bca979f-0a71-4eeb-bb55-df43a4906d07 req-c4752010-6113-4de4-aef4-efc80b4275e3 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.444 189444 DEBUG oslo_concurrency.lockutils [req-5bca979f-0a71-4eeb-bb55-df43a4906d07 req-c4752010-6113-4de4-aef4-efc80b4275e3 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.445 189444 DEBUG oslo_concurrency.lockutils [req-5bca979f-0a71-4eeb-bb55-df43a4906d07 req-c4752010-6113-4de4-aef4-efc80b4275e3 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.446 189444 DEBUG nova.compute.manager [req-5bca979f-0a71-4eeb-bb55-df43a4906d07 req-c4752010-6113-4de4-aef4-efc80b4275e3 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] No waiting events found dispatching network-vif-unplugged-e82f4978-3a5a-4e23-8c30-c60478cd656f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.447 189444 DEBUG nova.compute.manager [req-5bca979f-0a71-4eeb-bb55-df43a4906d07 req-c4752010-6113-4de4-aef4-efc80b4275e3 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received event network-vif-unplugged-e82f4978-3a5a-4e23-8c30-c60478cd656f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.455 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.484 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.513 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:21:11 compute-0 nova_compute[189440]: 2025-12-11 14:21:11.514 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.267 189444 DEBUG nova.network.neutron [-] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.326 189444 INFO nova.compute.manager [-] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Took 1.36 seconds to deallocate network for instance.#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.365 189444 DEBUG oslo_concurrency.lockutils [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.366 189444 DEBUG oslo_concurrency.lockutils [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.426 189444 DEBUG nova.compute.provider_tree [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.441 189444 DEBUG nova.scheduler.client.report [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.462 189444 DEBUG oslo_concurrency.lockutils [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.488 189444 INFO nova.scheduler.client.report [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Deleted allocations for instance 82437023-b24d-48bf-af1c-d1957df4da67#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.510 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.511 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:21:12 compute-0 nova_compute[189440]: 2025-12-11 14:21:12.557 189444 DEBUG oslo_concurrency.lockutils [None req-8d502874-4db7-465d-846e-bece2d5a478d 26c7a9a5c1c0404bb144cd3cba8ecf9f 9c30b62d3d094e1e8b410a2af9fd7d98 - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:21:13 compute-0 nova_compute[189440]: 2025-12-11 14:21:13.529 189444 DEBUG nova.compute.manager [req-85601aef-cfde-46b3-91ef-84b4227cb58d req-f5efed5c-eab5-44d0-97e6-9f7b5f3ac62e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received event network-vif-plugged-e82f4978-3a5a-4e23-8c30-c60478cd656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:21:13 compute-0 nova_compute[189440]: 2025-12-11 14:21:13.531 189444 DEBUG oslo_concurrency.lockutils [req-85601aef-cfde-46b3-91ef-84b4227cb58d req-f5efed5c-eab5-44d0-97e6-9f7b5f3ac62e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "82437023-b24d-48bf-af1c-d1957df4da67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:21:13 compute-0 nova_compute[189440]: 2025-12-11 14:21:13.531 189444 DEBUG oslo_concurrency.lockutils [req-85601aef-cfde-46b3-91ef-84b4227cb58d req-f5efed5c-eab5-44d0-97e6-9f7b5f3ac62e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:21:13 compute-0 nova_compute[189440]: 2025-12-11 14:21:13.532 189444 DEBUG oslo_concurrency.lockutils [req-85601aef-cfde-46b3-91ef-84b4227cb58d req-f5efed5c-eab5-44d0-97e6-9f7b5f3ac62e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "82437023-b24d-48bf-af1c-d1957df4da67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:21:13 compute-0 nova_compute[189440]: 2025-12-11 14:21:13.533 189444 DEBUG nova.compute.manager [req-85601aef-cfde-46b3-91ef-84b4227cb58d req-f5efed5c-eab5-44d0-97e6-9f7b5f3ac62e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] No waiting events found dispatching network-vif-plugged-e82f4978-3a5a-4e23-8c30-c60478cd656f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:21:13 compute-0 nova_compute[189440]: 2025-12-11 14:21:13.534 189444 WARNING nova.compute.manager [req-85601aef-cfde-46b3-91ef-84b4227cb58d req-f5efed5c-eab5-44d0-97e6-9f7b5f3ac62e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received unexpected event network-vif-plugged-e82f4978-3a5a-4e23-8c30-c60478cd656f for instance with vm_state deleted and task_state None.#033[00m
Dec 11 14:21:13 compute-0 nova_compute[189440]: 2025-12-11 14:21:13.535 189444 DEBUG nova.compute.manager [req-85601aef-cfde-46b3-91ef-84b4227cb58d req-f5efed5c-eab5-44d0-97e6-9f7b5f3ac62e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Received event network-vif-deleted-e82f4978-3a5a-4e23-8c30-c60478cd656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:21:15 compute-0 nova_compute[189440]: 2025-12-11 14:21:15.852 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:15 compute-0 nova_compute[189440]: 2025-12-11 14:21:15.909 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:19 compute-0 podman[249281]: 2025-12-11 14:21:19.492905078 +0000 UTC m=+0.082651579 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:21:20 compute-0 nova_compute[189440]: 2025-12-11 14:21:20.854 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:20 compute-0 nova_compute[189440]: 2025-12-11 14:21:20.912 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:21 compute-0 podman[249307]: 2025-12-11 14:21:21.482659807 +0000 UTC m=+0.076465639 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 11 14:21:25 compute-0 podman[249328]: 2025-12-11 14:21:25.485981285 +0000 UTC m=+0.083370339 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:21:25 compute-0 nova_compute[189440]: 2025-12-11 14:21:25.856 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:25 compute-0 nova_compute[189440]: 2025-12-11 14:21:25.877 189444 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765462870.8754003, 82437023-b24d-48bf-af1c-d1957df4da67 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:21:25 compute-0 nova_compute[189440]: 2025-12-11 14:21:25.877 189444 INFO nova.compute.manager [-] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] VM Stopped (Lifecycle Event)#033[00m
Dec 11 14:21:25 compute-0 nova_compute[189440]: 2025-12-11 14:21:25.914 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:25 compute-0 nova_compute[189440]: 2025-12-11 14:21:25.932 189444 DEBUG nova.compute.manager [None req-e1b185d7-1f56-4e0d-98f6-fd355a75cf7c - - - - - -] [instance: 82437023-b24d-48bf-af1c-d1957df4da67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:21:27 compute-0 podman[249346]: 2025-12-11 14:21:27.479661539 +0000 UTC m=+0.074785498 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 11 14:21:27 compute-0 podman[249347]: 2025-12-11 14:21:27.511499833 +0000 UTC m=+0.091415143 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 11 14:21:27 compute-0 podman[249348]: 2025-12-11 14:21:27.527111532 +0000 UTC m=+0.104587192 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 11 14:21:29 compute-0 podman[203650]: time="2025-12-11T14:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:21:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:21:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec 11 14:21:30 compute-0 nova_compute[189440]: 2025-12-11 14:21:30.859 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:30 compute-0 nova_compute[189440]: 2025-12-11 14:21:30.917 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:31 compute-0 openstack_network_exporter[205834]: ERROR   14:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:21:31 compute-0 openstack_network_exporter[205834]: ERROR   14:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:21:31 compute-0 openstack_network_exporter[205834]: ERROR   14:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:21:31 compute-0 openstack_network_exporter[205834]: ERROR   14:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:21:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:21:31 compute-0 openstack_network_exporter[205834]: ERROR   14:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:21:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:21:33 compute-0 podman[249401]: 2025-12-11 14:21:33.560956909 +0000 UTC m=+0.150692633 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:21:35 compute-0 nova_compute[189440]: 2025-12-11 14:21:35.862 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:35 compute-0 nova_compute[189440]: 2025-12-11 14:21:35.920 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:37 compute-0 podman[249427]: 2025-12-11 14:21:37.513548004 +0000 UTC m=+0.099269314 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:21:37 compute-0 podman[249426]: 2025-12-11 14:21:37.541491593 +0000 UTC m=+0.133830994 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 11 14:21:40 compute-0 nova_compute[189440]: 2025-12-11 14:21:40.864 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:40 compute-0 nova_compute[189440]: 2025-12-11 14:21:40.922 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:42 compute-0 ovn_controller[97832]: 2025-12-11T14:21:42Z|00064|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.988 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.989 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc9ce60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.010 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.010 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.012 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.012 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.012 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.012 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.012 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.012 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:21:43.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:21:45 compute-0 nova_compute[189440]: 2025-12-11 14:21:45.867 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:45 compute-0 nova_compute[189440]: 2025-12-11 14:21:45.925 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:50 compute-0 podman[249471]: 2025-12-11 14:21:50.473734251 +0000 UTC m=+0.071077899 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:21:50 compute-0 nova_compute[189440]: 2025-12-11 14:21:50.871 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:50 compute-0 nova_compute[189440]: 2025-12-11 14:21:50.928 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:52 compute-0 podman[249497]: 2025-12-11 14:21:52.48939531 +0000 UTC m=+0.084778903 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 11 14:21:55 compute-0 nova_compute[189440]: 2025-12-11 14:21:55.873 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:55 compute-0 nova_compute[189440]: 2025-12-11 14:21:55.932 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:21:56 compute-0 podman[249518]: 2025-12-11 14:21:56.47887014 +0000 UTC m=+0.080918208 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 11 14:21:58 compute-0 podman[249538]: 2025-12-11 14:21:58.545393865 +0000 UTC m=+0.140969668 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 14:21:58 compute-0 podman[249539]: 2025-12-11 14:21:58.557422238 +0000 UTC m=+0.131357494 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, io.buildah.version=1.29.0)
Dec 11 14:21:58 compute-0 podman[249546]: 2025-12-11 14:21:58.581540684 +0000 UTC m=+0.152398235 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 14:21:59 compute-0 podman[203650]: time="2025-12-11T14:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:21:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:21:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Dec 11 14:22:00 compute-0 nova_compute[189440]: 2025-12-11 14:22:00.875 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:00 compute-0 nova_compute[189440]: 2025-12-11 14:22:00.934 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:01 compute-0 nova_compute[189440]: 2025-12-11 14:22:01.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:01 compute-0 openstack_network_exporter[205834]: ERROR   14:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:22:01 compute-0 openstack_network_exporter[205834]: ERROR   14:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:22:01 compute-0 openstack_network_exporter[205834]: ERROR   14:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:22:01 compute-0 openstack_network_exporter[205834]: ERROR   14:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:22:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:22:01 compute-0 openstack_network_exporter[205834]: ERROR   14:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:22:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:22:02 compute-0 nova_compute[189440]: 2025-12-11 14:22:02.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:02 compute-0 nova_compute[189440]: 2025-12-11 14:22:02.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:02 compute-0 nova_compute[189440]: 2025-12-11 14:22:02.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:22:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:22:04.101 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:22:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:22:04.101 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:22:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:22:04.101 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:22:04 compute-0 nova_compute[189440]: 2025-12-11 14:22:04.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:04 compute-0 nova_compute[189440]: 2025-12-11 14:22:04.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:22:04 compute-0 nova_compute[189440]: 2025-12-11 14:22:04.238 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:22:04 compute-0 nova_compute[189440]: 2025-12-11 14:22:04.259 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 14:22:04 compute-0 nova_compute[189440]: 2025-12-11 14:22:04.260 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:04 compute-0 podman[249595]: 2025-12-11 14:22:04.545900492 +0000 UTC m=+0.132889791 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 14:22:05 compute-0 nova_compute[189440]: 2025-12-11 14:22:05.878 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:05 compute-0 nova_compute[189440]: 2025-12-11 14:22:05.935 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:08 compute-0 podman[249622]: 2025-12-11 14:22:08.483648365 +0000 UTC m=+0.072697358 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:22:08 compute-0 podman[249621]: 2025-12-11 14:22:08.529856068 +0000 UTC m=+0.132605874 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 11 14:22:09 compute-0 nova_compute[189440]: 2025-12-11 14:22:09.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:10 compute-0 nova_compute[189440]: 2025-12-11 14:22:10.881 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:10 compute-0 nova_compute[189440]: 2025-12-11 14:22:10.937 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:11 compute-0 nova_compute[189440]: 2025-12-11 14:22:11.230 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.268 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.269 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.269 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.269 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.696 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.697 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5384MB free_disk=72.36851119995117GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.698 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.698 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.771 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.772 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.816 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.831 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.854 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:22:12 compute-0 nova_compute[189440]: 2025-12-11 14:22:12.855 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:22:15 compute-0 nova_compute[189440]: 2025-12-11 14:22:15.885 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:15 compute-0 nova_compute[189440]: 2025-12-11 14:22:15.939 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:20 compute-0 nova_compute[189440]: 2025-12-11 14:22:20.889 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:20 compute-0 nova_compute[189440]: 2025-12-11 14:22:20.942 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:21 compute-0 podman[249663]: 2025-12-11 14:22:21.524396228 +0000 UTC m=+0.113995002 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 14:22:23 compute-0 podman[249686]: 2025-12-11 14:22:23.496765015 +0000 UTC m=+0.097231655 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 11 14:22:25 compute-0 nova_compute[189440]: 2025-12-11 14:22:25.892 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:25 compute-0 nova_compute[189440]: 2025-12-11 14:22:25.945 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:27 compute-0 podman[249706]: 2025-12-11 14:22:27.465472716 +0000 UTC m=+0.067333319 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:22:29 compute-0 podman[249726]: 2025-12-11 14:22:29.493988343 +0000 UTC m=+0.076870433 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:22:29 compute-0 podman[249728]: 2025-12-11 14:22:29.502375819 +0000 UTC m=+0.081673701 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 14:22:29 compute-0 podman[249727]: 2025-12-11 14:22:29.526182242 +0000 UTC m=+0.098754270 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, architecture=x86_64, release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, release-0.7.12=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 11 14:22:29 compute-0 podman[203650]: time="2025-12-11T14:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:22:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:22:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4333 "" "Go-http-client/1.1"
Dec 11 14:22:30 compute-0 nova_compute[189440]: 2025-12-11 14:22:30.894 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:30 compute-0 nova_compute[189440]: 2025-12-11 14:22:30.947 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:31 compute-0 openstack_network_exporter[205834]: ERROR   14:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:22:31 compute-0 openstack_network_exporter[205834]: ERROR   14:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:22:31 compute-0 openstack_network_exporter[205834]: ERROR   14:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:22:31 compute-0 openstack_network_exporter[205834]: ERROR   14:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:22:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:22:31 compute-0 openstack_network_exporter[205834]: ERROR   14:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:22:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:22:35 compute-0 podman[249776]: 2025-12-11 14:22:35.511514312 +0000 UTC m=+0.113756665 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 11 14:22:35 compute-0 nova_compute[189440]: 2025-12-11 14:22:35.898 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:35 compute-0 nova_compute[189440]: 2025-12-11 14:22:35.949 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:39 compute-0 podman[249803]: 2025-12-11 14:22:39.514510826 +0000 UTC m=+0.091075601 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:22:39 compute-0 podman[249802]: 2025-12-11 14:22:39.520376919 +0000 UTC m=+0.103955255 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Dec 11 14:22:40 compute-0 nova_compute[189440]: 2025-12-11 14:22:40.900 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:40 compute-0 nova_compute[189440]: 2025-12-11 14:22:40.951 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:45 compute-0 nova_compute[189440]: 2025-12-11 14:22:45.902 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:45 compute-0 nova_compute[189440]: 2025-12-11 14:22:45.953 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:50 compute-0 nova_compute[189440]: 2025-12-11 14:22:50.906 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:50 compute-0 nova_compute[189440]: 2025-12-11 14:22:50.955 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:51 compute-0 nova_compute[189440]: 2025-12-11 14:22:51.399 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:52 compute-0 podman[249849]: 2025-12-11 14:22:52.500640536 +0000 UTC m=+0.093637084 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:22:54 compute-0 podman[249874]: 2025-12-11 14:22:54.469945734 +0000 UTC m=+0.070125658 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 11 14:22:55 compute-0 nova_compute[189440]: 2025-12-11 14:22:55.909 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:55 compute-0 nova_compute[189440]: 2025-12-11 14:22:55.957 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:22:58 compute-0 nova_compute[189440]: 2025-12-11 14:22:58.386 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:22:58 compute-0 podman[249893]: 2025-12-11 14:22:58.477609421 +0000 UTC m=+0.078672757 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:22:59 compute-0 podman[203650]: time="2025-12-11T14:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:22:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:22:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4337 "" "Go-http-client/1.1"
Dec 11 14:23:00 compute-0 podman[249914]: 2025-12-11 14:23:00.503435434 +0000 UTC m=+0.087485642 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 14:23:00 compute-0 podman[249913]: 2025-12-11 14:23:00.519338034 +0000 UTC m=+0.101516337 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, name=ubi9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_id=edpm, release=1214.1726694543, version=9.4, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, 
build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 11 14:23:00 compute-0 podman[249912]: 2025-12-11 14:23:00.523985057 +0000 UTC m=+0.106253232 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 11 14:23:00 compute-0 nova_compute[189440]: 2025-12-11 14:23:00.911 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:00 compute-0 nova_compute[189440]: 2025-12-11 14:23:00.959 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:01 compute-0 openstack_network_exporter[205834]: ERROR   14:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:23:01 compute-0 openstack_network_exporter[205834]: ERROR   14:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:23:01 compute-0 openstack_network_exporter[205834]: ERROR   14:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:23:01 compute-0 openstack_network_exporter[205834]: ERROR   14:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:23:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:23:01 compute-0 openstack_network_exporter[205834]: ERROR   14:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:23:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:23:02 compute-0 nova_compute[189440]: 2025-12-11 14:23:02.262 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:03 compute-0 nova_compute[189440]: 2025-12-11 14:23:03.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:03 compute-0 nova_compute[189440]: 2025-12-11 14:23:03.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:03 compute-0 nova_compute[189440]: 2025-12-11 14:23:03.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:23:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:23:04.103 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:23:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:23:04.104 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:23:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:23:04.104 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:23:05 compute-0 nova_compute[189440]: 2025-12-11 14:23:05.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:05 compute-0 nova_compute[189440]: 2025-12-11 14:23:05.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:05 compute-0 nova_compute[189440]: 2025-12-11 14:23:05.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 11 14:23:05 compute-0 nova_compute[189440]: 2025-12-11 14:23:05.259 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 11 14:23:05 compute-0 nova_compute[189440]: 2025-12-11 14:23:05.913 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:05 compute-0 nova_compute[189440]: 2025-12-11 14:23:05.962 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:06 compute-0 nova_compute[189440]: 2025-12-11 14:23:06.261 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:06 compute-0 nova_compute[189440]: 2025-12-11 14:23:06.262 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:23:06 compute-0 nova_compute[189440]: 2025-12-11 14:23:06.263 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:23:06 compute-0 nova_compute[189440]: 2025-12-11 14:23:06.283 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 14:23:06 compute-0 podman[249965]: 2025-12-11 14:23:06.421218652 +0000 UTC m=+0.107743369 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 11 14:23:10 compute-0 nova_compute[189440]: 2025-12-11 14:23:10.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:10 compute-0 podman[249990]: 2025-12-11 14:23:10.481116127 +0000 UTC m=+0.068066048 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:23:10 compute-0 podman[249989]: 2025-12-11 14:23:10.49313784 +0000 UTC m=+0.090299881 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
Dec 11 14:23:10 compute-0 nova_compute[189440]: 2025-12-11 14:23:10.916 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:10 compute-0 nova_compute[189440]: 2025-12-11 14:23:10.965 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.230 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.231 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.251 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.277 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.278 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.279 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.279 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.695 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.696 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5386MB free_disk=72.36851119995117GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.696 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.697 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.862 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.862 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.877 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing inventories for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.894 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating ProviderTree inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.894 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.908 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing aggregate associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.928 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing trait associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.954 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.971 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.974 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:23:12 compute-0 nova_compute[189440]: 2025-12-11 14:23:12.975 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:23:14 compute-0 nova_compute[189440]: 2025-12-11 14:23:14.959 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:15 compute-0 nova_compute[189440]: 2025-12-11 14:23:15.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:15 compute-0 nova_compute[189440]: 2025-12-11 14:23:15.920 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:15 compute-0 nova_compute[189440]: 2025-12-11 14:23:15.967 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:20 compute-0 nova_compute[189440]: 2025-12-11 14:23:20.922 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:20 compute-0 nova_compute[189440]: 2025-12-11 14:23:20.970 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:23 compute-0 podman[250035]: 2025-12-11 14:23:23.468298793 +0000 UTC m=+0.062700156 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:23:24 compute-0 nova_compute[189440]: 2025-12-11 14:23:24.255 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:23:24 compute-0 nova_compute[189440]: 2025-12-11 14:23:24.255 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 11 14:23:25 compute-0 podman[250059]: 2025-12-11 14:23:25.478210296 +0000 UTC m=+0.079909787 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 14:23:25 compute-0 nova_compute[189440]: 2025-12-11 14:23:25.925 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:25 compute-0 nova_compute[189440]: 2025-12-11 14:23:25.973 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:29 compute-0 podman[250081]: 2025-12-11 14:23:29.484971311 +0000 UTC m=+0.069010210 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:23:29 compute-0 podman[203650]: time="2025-12-11T14:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:23:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:23:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec 11 14:23:30 compute-0 nova_compute[189440]: 2025-12-11 14:23:30.928 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:30 compute-0 nova_compute[189440]: 2025-12-11 14:23:30.975 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:31 compute-0 openstack_network_exporter[205834]: ERROR   14:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:23:31 compute-0 openstack_network_exporter[205834]: ERROR   14:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:23:31 compute-0 openstack_network_exporter[205834]: ERROR   14:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:23:31 compute-0 openstack_network_exporter[205834]: ERROR   14:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:23:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:23:31 compute-0 openstack_network_exporter[205834]: ERROR   14:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:23:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:23:31 compute-0 podman[250100]: 2025-12-11 14:23:31.482413239 +0000 UTC m=+0.082523851 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:23:31 compute-0 podman[250107]: 2025-12-11 14:23:31.492978907 +0000 UTC m=+0.076210066 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec 11 14:23:31 compute-0 podman[250101]: 2025-12-11 14:23:31.495576051 +0000 UTC m=+0.086377705 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, architecture=x86_64, name=ubi9, com.redhat.component=ubi9-container, version=9.4, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec 11 14:23:35 compute-0 nova_compute[189440]: 2025-12-11 14:23:35.932 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:35 compute-0 nova_compute[189440]: 2025-12-11 14:23:35.996 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:37 compute-0 podman[250156]: 2025-12-11 14:23:37.570185067 +0000 UTC m=+0.153203832 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true)
Dec 11 14:23:40 compute-0 nova_compute[189440]: 2025-12-11 14:23:40.934 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:41 compute-0 nova_compute[189440]: 2025-12-11 14:23:40.999 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:41 compute-0 podman[250183]: 2025-12-11 14:23:41.472897985 +0000 UTC m=+0.069708468 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:23:41 compute-0 podman[250182]: 2025-12-11 14:23:41.478537133 +0000 UTC m=+0.081307582 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41)
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.988 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.989 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81d60>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:23:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:23:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:23:45 compute-0 nova_compute[189440]: 2025-12-11 14:23:45.936 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:46 compute-0 nova_compute[189440]: 2025-12-11 14:23:46.002 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:50 compute-0 nova_compute[189440]: 2025-12-11 14:23:50.939 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:51 compute-0 nova_compute[189440]: 2025-12-11 14:23:51.004 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:54 compute-0 podman[250229]: 2025-12-11 14:23:54.505321631 +0000 UTC m=+0.100925712 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:23:55 compute-0 nova_compute[189440]: 2025-12-11 14:23:55.941 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:56 compute-0 nova_compute[189440]: 2025-12-11 14:23:56.007 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:23:56 compute-0 podman[250252]: 2025-12-11 14:23:56.466330858 +0000 UTC m=+0.062356798 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, config_id=multipathd)
Dec 11 14:23:59 compute-0 podman[203650]: time="2025-12-11T14:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:23:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:23:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec 11 14:24:00 compute-0 podman[250270]: 2025-12-11 14:24:00.468408737 +0000 UTC m=+0.069257486 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi)
Dec 11 14:24:00 compute-0 nova_compute[189440]: 2025-12-11 14:24:00.944 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:01 compute-0 nova_compute[189440]: 2025-12-11 14:24:01.009 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:01 compute-0 openstack_network_exporter[205834]: ERROR   14:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:24:01 compute-0 openstack_network_exporter[205834]: ERROR   14:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:24:01 compute-0 openstack_network_exporter[205834]: ERROR   14:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:24:01 compute-0 openstack_network_exporter[205834]: ERROR   14:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:24:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:24:01 compute-0 openstack_network_exporter[205834]: ERROR   14:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:24:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:24:02 compute-0 nova_compute[189440]: 2025-12-11 14:24:02.252 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:24:02 compute-0 podman[250291]: 2025-12-11 14:24:02.526563491 +0000 UTC m=+0.108717473 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., distribution-scope=public, version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 14:24:02 compute-0 podman[250292]: 2025-12-11 14:24:02.537395036 +0000 UTC m=+0.116861252 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 14:24:02 compute-0 podman[250290]: 2025-12-11 14:24:02.537442347 +0000 UTC m=+0.118326218 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:24:03 compute-0 nova_compute[189440]: 2025-12-11 14:24:03.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:24:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:24:04.105 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:24:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:24:04.106 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:24:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:24:04.106 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:24:04 compute-0 nova_compute[189440]: 2025-12-11 14:24:04.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:24:04 compute-0 nova_compute[189440]: 2025-12-11 14:24:04.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:24:05 compute-0 nova_compute[189440]: 2025-12-11 14:24:05.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:24:05 compute-0 nova_compute[189440]: 2025-12-11 14:24:05.948 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:06 compute-0 nova_compute[189440]: 2025-12-11 14:24:06.011 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:08 compute-0 nova_compute[189440]: 2025-12-11 14:24:08.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:24:08 compute-0 nova_compute[189440]: 2025-12-11 14:24:08.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:24:08 compute-0 nova_compute[189440]: 2025-12-11 14:24:08.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:24:08 compute-0 nova_compute[189440]: 2025-12-11 14:24:08.268 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 14:24:08 compute-0 podman[250346]: 2025-12-11 14:24:08.578837961 +0000 UTC m=+0.174580165 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 14:24:10 compute-0 nova_compute[189440]: 2025-12-11 14:24:10.950 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:11 compute-0 nova_compute[189440]: 2025-12-11 14:24:11.014 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:11 compute-0 nova_compute[189440]: 2025-12-11 14:24:11.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:24:12 compute-0 nova_compute[189440]: 2025-12-11 14:24:12.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:24:12 compute-0 nova_compute[189440]: 2025-12-11 14:24:12.267 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:24:12 compute-0 nova_compute[189440]: 2025-12-11 14:24:12.268 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:24:12 compute-0 nova_compute[189440]: 2025-12-11 14:24:12.268 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:24:12 compute-0 nova_compute[189440]: 2025-12-11 14:24:12.269 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:24:12 compute-0 podman[250372]: 2025-12-11 14:24:12.559035085 +0000 UTC m=+0.128116177 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Dec 11 14:24:12 compute-0 podman[250373]: 2025-12-11 14:24:12.570547548 +0000 UTC m=+0.135422977 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:24:12 compute-0 nova_compute[189440]: 2025-12-11 14:24:12.746 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:24:12 compute-0 nova_compute[189440]: 2025-12-11 14:24:12.747 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5385MB free_disk=72.36849212646484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:24:12 compute-0 nova_compute[189440]: 2025-12-11 14:24:12.748 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:24:12 compute-0 nova_compute[189440]: 2025-12-11 14:24:12.748 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:24:13 compute-0 nova_compute[189440]: 2025-12-11 14:24:13.023 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:24:13 compute-0 nova_compute[189440]: 2025-12-11 14:24:13.024 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:24:13 compute-0 nova_compute[189440]: 2025-12-11 14:24:13.192 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:24:13 compute-0 nova_compute[189440]: 2025-12-11 14:24:13.211 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:24:13 compute-0 nova_compute[189440]: 2025-12-11 14:24:13.213 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:24:13 compute-0 nova_compute[189440]: 2025-12-11 14:24:13.214 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:24:14 compute-0 nova_compute[189440]: 2025-12-11 14:24:14.209 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:24:14 compute-0 nova_compute[189440]: 2025-12-11 14:24:14.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:24:15 compute-0 nova_compute[189440]: 2025-12-11 14:24:15.953 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:16 compute-0 nova_compute[189440]: 2025-12-11 14:24:16.017 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:20 compute-0 nova_compute[189440]: 2025-12-11 14:24:20.956 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:21 compute-0 nova_compute[189440]: 2025-12-11 14:24:21.019 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:25 compute-0 podman[250419]: 2025-12-11 14:24:25.475437127 +0000 UTC m=+0.069498063 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:24:25 compute-0 nova_compute[189440]: 2025-12-11 14:24:25.958 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:26 compute-0 nova_compute[189440]: 2025-12-11 14:24:26.022 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:27 compute-0 podman[250443]: 2025-12-11 14:24:27.473440408 +0000 UTC m=+0.078883133 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 14:24:29 compute-0 podman[203650]: time="2025-12-11T14:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:24:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:24:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
Dec 11 14:24:30 compute-0 nova_compute[189440]: 2025-12-11 14:24:30.960 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:31 compute-0 nova_compute[189440]: 2025-12-11 14:24:31.025 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:31 compute-0 openstack_network_exporter[205834]: ERROR   14:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:24:31 compute-0 openstack_network_exporter[205834]: ERROR   14:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:24:31 compute-0 openstack_network_exporter[205834]: ERROR   14:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:24:31 compute-0 openstack_network_exporter[205834]: ERROR   14:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:24:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:24:31 compute-0 openstack_network_exporter[205834]: ERROR   14:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:24:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:24:31 compute-0 podman[250463]: 2025-12-11 14:24:31.521617047 +0000 UTC m=+0.100099562 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:24:33 compute-0 podman[250482]: 2025-12-11 14:24:33.499461222 +0000 UTC m=+0.097756763 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 14:24:33 compute-0 podman[250484]: 2025-12-11 14:24:33.510136684 +0000 UTC m=+0.089497832 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 11 14:24:33 compute-0 podman[250483]: 2025-12-11 14:24:33.519043302 +0000 UTC m=+0.111134431 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, managed_by=edpm_ansible, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=)
Dec 11 14:24:35 compute-0 nova_compute[189440]: 2025-12-11 14:24:35.963 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:36 compute-0 nova_compute[189440]: 2025-12-11 14:24:36.027 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:39 compute-0 podman[250536]: 2025-12-11 14:24:39.534723555 +0000 UTC m=+0.138653535 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:24:40 compute-0 nova_compute[189440]: 2025-12-11 14:24:40.965 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:41 compute-0 nova_compute[189440]: 2025-12-11 14:24:41.029 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:43 compute-0 podman[250562]: 2025-12-11 14:24:43.468265147 +0000 UTC m=+0.062627663 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:24:43 compute-0 podman[250561]: 2025-12-11 14:24:43.48060273 +0000 UTC m=+0.076658768 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7)
Dec 11 14:24:45 compute-0 nova_compute[189440]: 2025-12-11 14:24:45.967 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:46 compute-0 nova_compute[189440]: 2025-12-11 14:24:46.032 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:50 compute-0 nova_compute[189440]: 2025-12-11 14:24:50.970 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:51 compute-0 nova_compute[189440]: 2025-12-11 14:24:51.034 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:55 compute-0 nova_compute[189440]: 2025-12-11 14:24:55.971 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:56 compute-0 nova_compute[189440]: 2025-12-11 14:24:56.037 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:24:56 compute-0 podman[250605]: 2025-12-11 14:24:56.475548095 +0000 UTC m=+0.078558314 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:24:57 compute-0 podman[250631]: 2025-12-11 14:24:57.83117806 +0000 UTC m=+0.119571438 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 11 14:24:59 compute-0 podman[203650]: time="2025-12-11T14:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:24:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:24:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec 11 14:25:00 compute-0 nova_compute[189440]: 2025-12-11 14:25:00.974 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:01 compute-0 nova_compute[189440]: 2025-12-11 14:25:01.039 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:01 compute-0 openstack_network_exporter[205834]: ERROR   14:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:25:01 compute-0 openstack_network_exporter[205834]: ERROR   14:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:25:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:25:01 compute-0 openstack_network_exporter[205834]: ERROR   14:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:25:01 compute-0 openstack_network_exporter[205834]: ERROR   14:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:25:01 compute-0 openstack_network_exporter[205834]: ERROR   14:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:25:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:25:02 compute-0 nova_compute[189440]: 2025-12-11 14:25:02.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:02 compute-0 podman[250650]: 2025-12-11 14:25:02.481453559 +0000 UTC m=+0.079527578 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 11 14:25:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:25:04.106 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:25:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:25:04.107 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:25:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:25:04.107 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:25:04 compute-0 podman[250671]: 2025-12-11 14:25:04.489200709 +0000 UTC m=+0.079844935 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:25:04 compute-0 podman[250669]: 2025-12-11 14:25:04.492981301 +0000 UTC m=+0.085580506 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:25:04 compute-0 podman[250670]: 2025-12-11 14:25:04.516130618 +0000 UTC m=+0.107954453 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, container_name=kepler)
Dec 11 14:25:05 compute-0 nova_compute[189440]: 2025-12-11 14:25:05.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:05 compute-0 nova_compute[189440]: 2025-12-11 14:25:05.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:05 compute-0 nova_compute[189440]: 2025-12-11 14:25:05.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:25:05 compute-0 nova_compute[189440]: 2025-12-11 14:25:05.978 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:06 compute-0 nova_compute[189440]: 2025-12-11 14:25:06.043 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:07 compute-0 nova_compute[189440]: 2025-12-11 14:25:07.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:10 compute-0 nova_compute[189440]: 2025-12-11 14:25:10.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:10 compute-0 nova_compute[189440]: 2025-12-11 14:25:10.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:25:10 compute-0 nova_compute[189440]: 2025-12-11 14:25:10.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:25:10 compute-0 nova_compute[189440]: 2025-12-11 14:25:10.250 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 14:25:10 compute-0 podman[250725]: 2025-12-11 14:25:10.549004842 +0000 UTC m=+0.134798492 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 14:25:10 compute-0 nova_compute[189440]: 2025-12-11 14:25:10.981 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:11 compute-0 nova_compute[189440]: 2025-12-11 14:25:11.045 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:13 compute-0 nova_compute[189440]: 2025-12-11 14:25:13.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:13 compute-0 nova_compute[189440]: 2025-12-11 14:25:13.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.261 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.262 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.263 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.263 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:25:14 compute-0 podman[250750]: 2025-12-11 14:25:14.478178018 +0000 UTC m=+0.080780819 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Dec 11 14:25:14 compute-0 podman[250751]: 2025-12-11 14:25:14.486657045 +0000 UTC m=+0.077417285 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.621 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.622 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5389MB free_disk=72.36855697631836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.622 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.622 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.690 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.691 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.819 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.839 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.840 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:25:14 compute-0 nova_compute[189440]: 2025-12-11 14:25:14.840 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:25:15 compute-0 nova_compute[189440]: 2025-12-11 14:25:15.983 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:16 compute-0 nova_compute[189440]: 2025-12-11 14:25:16.048 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:16 compute-0 nova_compute[189440]: 2025-12-11 14:25:16.836 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:16 compute-0 nova_compute[189440]: 2025-12-11 14:25:16.891 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:25:20 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:25:20.735 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:25:20 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:25:20.735 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:25:20 compute-0 nova_compute[189440]: 2025-12-11 14:25:20.736 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:20 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:25:20.738 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:25:20 compute-0 nova_compute[189440]: 2025-12-11 14:25:20.985 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:21 compute-0 nova_compute[189440]: 2025-12-11 14:25:21.050 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:25 compute-0 nova_compute[189440]: 2025-12-11 14:25:25.988 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:26 compute-0 nova_compute[189440]: 2025-12-11 14:25:26.052 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:27 compute-0 podman[250794]: 2025-12-11 14:25:27.472243101 +0000 UTC m=+0.069512392 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:25:28 compute-0 podman[250817]: 2025-12-11 14:25:28.473650337 +0000 UTC m=+0.077245622 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 11 14:25:29 compute-0 podman[203650]: time="2025-12-11T14:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:25:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:25:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
Dec 11 14:25:30 compute-0 nova_compute[189440]: 2025-12-11 14:25:30.990 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:31 compute-0 nova_compute[189440]: 2025-12-11 14:25:31.054 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:31 compute-0 openstack_network_exporter[205834]: ERROR   14:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:25:31 compute-0 openstack_network_exporter[205834]: ERROR   14:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:25:31 compute-0 openstack_network_exporter[205834]: ERROR   14:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:25:31 compute-0 openstack_network_exporter[205834]: ERROR   14:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:25:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:25:31 compute-0 openstack_network_exporter[205834]: ERROR   14:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:25:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:25:33 compute-0 podman[250837]: 2025-12-11 14:25:33.490274503 +0000 UTC m=+0.086441518 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 14:25:35 compute-0 podman[250858]: 2025-12-11 14:25:35.512129458 +0000 UTC m=+0.091220624 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210)
Dec 11 14:25:35 compute-0 podman[250857]: 2025-12-11 14:25:35.522960782 +0000 UTC m=+0.110788093 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container)
Dec 11 14:25:35 compute-0 podman[250856]: 2025-12-11 14:25:35.526933649 +0000 UTC m=+0.112336940 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:25:35 compute-0 nova_compute[189440]: 2025-12-11 14:25:35.993 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:36 compute-0 nova_compute[189440]: 2025-12-11 14:25:36.055 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:40 compute-0 nova_compute[189440]: 2025-12-11 14:25:40.995 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:41 compute-0 nova_compute[189440]: 2025-12-11 14:25:41.058 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:41 compute-0 podman[250914]: 2025-12-11 14:25:41.510651181 +0000 UTC m=+0.106952159 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.989 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.989 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'cpu': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:25:43.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:25:44 compute-0 podman[250942]: 2025-12-11 14:25:44.766507693 +0000 UTC m=+0.062580692 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 14:25:44 compute-0 podman[250941]: 2025-12-11 14:25:44.77373375 +0000 UTC m=+0.074301220 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350)
Dec 11 14:25:45 compute-0 nova_compute[189440]: 2025-12-11 14:25:45.996 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:46 compute-0 nova_compute[189440]: 2025-12-11 14:25:46.060 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:50 compute-0 ovn_controller[97832]: 2025-12-11T14:25:50Z|00065|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Dec 11 14:25:50 compute-0 nova_compute[189440]: 2025-12-11 14:25:50.998 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:51 compute-0 nova_compute[189440]: 2025-12-11 14:25:51.062 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:56 compute-0 nova_compute[189440]: 2025-12-11 14:25:56.001 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:56 compute-0 nova_compute[189440]: 2025-12-11 14:25:56.064 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:25:58 compute-0 podman[250983]: 2025-12-11 14:25:58.482104991 +0000 UTC m=+0.076830862 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 14:25:58 compute-0 podman[251006]: 2025-12-11 14:25:58.612653296 +0000 UTC m=+0.095598020 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 11 14:25:59 compute-0 podman[203650]: time="2025-12-11T14:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:25:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:25:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec 11 14:26:01 compute-0 nova_compute[189440]: 2025-12-11 14:26:01.002 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:01 compute-0 nova_compute[189440]: 2025-12-11 14:26:01.067 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:01 compute-0 openstack_network_exporter[205834]: ERROR   14:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:26:01 compute-0 openstack_network_exporter[205834]: ERROR   14:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:26:01 compute-0 openstack_network_exporter[205834]: ERROR   14:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:26:01 compute-0 openstack_network_exporter[205834]: ERROR   14:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:26:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:26:01 compute-0 openstack_network_exporter[205834]: ERROR   14:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:26:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:26:02 compute-0 nova_compute[189440]: 2025-12-11 14:26:02.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:26:03 compute-0 nova_compute[189440]: 2025-12-11 14:26:03.173 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:04.108 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:26:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:04.109 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:26:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:04.109 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:26:04 compute-0 podman[251026]: 2025-12-11 14:26:04.526108548 +0000 UTC m=+0.114413722 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 11 14:26:04 compute-0 nova_compute[189440]: 2025-12-11 14:26:04.878 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:04 compute-0 nova_compute[189440]: 2025-12-11 14:26:04.966 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:06 compute-0 nova_compute[189440]: 2025-12-11 14:26:06.004 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:06 compute-0 nova_compute[189440]: 2025-12-11 14:26:06.069 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:06 compute-0 nova_compute[189440]: 2025-12-11 14:26:06.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:26:06 compute-0 nova_compute[189440]: 2025-12-11 14:26:06.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:26:06 compute-0 podman[251050]: 2025-12-11 14:26:06.518590274 +0000 UTC m=+0.097493948 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Dec 11 14:26:06 compute-0 podman[251049]: 2025-12-11 14:26:06.522945571 +0000 UTC m=+0.112285980 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 11 14:26:06 compute-0 podman[251048]: 2025-12-11 14:26:06.532533485 +0000 UTC m=+0.130189978 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 11 14:26:07 compute-0 nova_compute[189440]: 2025-12-11 14:26:07.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:26:07 compute-0 nova_compute[189440]: 2025-12-11 14:26:07.649 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:09 compute-0 nova_compute[189440]: 2025-12-11 14:26:09.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:26:11 compute-0 nova_compute[189440]: 2025-12-11 14:26:11.008 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:11 compute-0 nova_compute[189440]: 2025-12-11 14:26:11.072 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:11 compute-0 nova_compute[189440]: 2025-12-11 14:26:11.125 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:11 compute-0 nova_compute[189440]: 2025-12-11 14:26:11.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:26:11 compute-0 nova_compute[189440]: 2025-12-11 14:26:11.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 11 14:26:11 compute-0 nova_compute[189440]: 2025-12-11 14:26:11.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 11 14:26:11 compute-0 nova_compute[189440]: 2025-12-11 14:26:11.408 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 11 14:26:12 compute-0 podman[251106]: 2025-12-11 14:26:12.573335552 +0000 UTC m=+0.164813595 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 11 14:26:12 compute-0 nova_compute[189440]: 2025-12-11 14:26:12.702 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.514 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.514 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.515 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.515 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.868 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.869 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5373MB free_disk=72.36847305297852GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.869 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:26:14 compute-0 nova_compute[189440]: 2025-12-11 14:26:14.870 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:26:15 compute-0 podman[251132]: 2025-12-11 14:26:15.507328567 +0000 UTC m=+0.106752774 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350)
Dec 11 14:26:15 compute-0 podman[251133]: 2025-12-11 14:26:15.545388438 +0000 UTC m=+0.126696323 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:26:16 compute-0 nova_compute[189440]: 2025-12-11 14:26:16.012 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:16 compute-0 nova_compute[189440]: 2025-12-11 14:26:16.074 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:21 compute-0 nova_compute[189440]: 2025-12-11 14:26:21.014 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:21 compute-0 nova_compute[189440]: 2025-12-11 14:26:21.077 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:23 compute-0 nova_compute[189440]: 2025-12-11 14:26:23.864 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 11 14:26:23 compute-0 nova_compute[189440]: 2025-12-11 14:26:23.864 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 11 14:26:23 compute-0 nova_compute[189440]: 2025-12-11 14:26:23.897 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 11 14:26:23 compute-0 nova_compute[189440]: 2025-12-11 14:26:23.961 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 11 14:26:23 compute-0 nova_compute[189440]: 2025-12-11 14:26:23.963 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 11 14:26:23 compute-0 nova_compute[189440]: 2025-12-11 14:26:23.964 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 9.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:26:24 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:24.157 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 11 14:26:24 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:24.159 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 11 14:26:24 compute-0 nova_compute[189440]: 2025-12-11 14:26:24.165 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:24 compute-0 nova_compute[189440]: 2025-12-11 14:26:24.965 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:26:24 compute-0 nova_compute[189440]: 2025-12-11 14:26:24.965 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:26:25 compute-0 nova_compute[189440]: 2025-12-11 14:26:25.200 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:26 compute-0 nova_compute[189440]: 2025-12-11 14:26:26.015 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:26 compute-0 nova_compute[189440]: 2025-12-11 14:26:26.080 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:26 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:26.161 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 11 14:26:27 compute-0 nova_compute[189440]: 2025-12-11 14:26:27.440 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:29 compute-0 podman[251178]: 2025-12-11 14:26:29.506503415 +0000 UTC m=+0.100360627 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 11 14:26:29 compute-0 podman[251179]: 2025-12-11 14:26:29.506488125 +0000 UTC m=+0.088034717 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:26:29 compute-0 podman[203650]: time="2025-12-11T14:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:26:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:26:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec 11 14:26:30 compute-0 nova_compute[189440]: 2025-12-11 14:26:30.731 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Acquiring lock "f64b46b2-b462-4f18-99a0-33cce11b70c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:26:30 compute-0 nova_compute[189440]: 2025-12-11 14:26:30.731 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:26:30 compute-0 nova_compute[189440]: 2025-12-11 14:26:30.752 189444 DEBUG nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 11 14:26:31 compute-0 nova_compute[189440]: 2025-12-11 14:26:31.018 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:31 compute-0 nova_compute[189440]: 2025-12-11 14:26:31.082 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:31 compute-0 nova_compute[189440]: 2025-12-11 14:26:31.344 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:26:31 compute-0 openstack_network_exporter[205834]: ERROR   14:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:26:31 compute-0 openstack_network_exporter[205834]: ERROR   14:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:26:31 compute-0 openstack_network_exporter[205834]: ERROR   14:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:26:31 compute-0 openstack_network_exporter[205834]: ERROR   14:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:26:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:26:31 compute-0 openstack_network_exporter[205834]: ERROR   14:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:26:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:26:31 compute-0 nova_compute[189440]: 2025-12-11 14:26:31.700 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:26:31 compute-0 nova_compute[189440]: 2025-12-11 14:26:31.701 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:26:31 compute-0 nova_compute[189440]: 2025-12-11 14:26:31.715 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 11 14:26:31 compute-0 nova_compute[189440]: 2025-12-11 14:26:31.716 189444 INFO nova.compute.claims [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Claim successful on node compute-0.ctlplane.example.com
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.326 189444 DEBUG nova.compute.provider_tree [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.340 189444 DEBUG nova.scheduler.client.report [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.372 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.372 189444 DEBUG nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.448 189444 DEBUG nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.448 189444 DEBUG nova.network.neutron [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.471 189444 INFO nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.520 189444 DEBUG nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.679 189444 DEBUG nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.680 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.681 189444 INFO nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Creating image(s)#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.681 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Acquiring lock "/var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.682 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "/var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.683 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "/var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.683 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Acquiring lock "b9398531008bd76fff67b1480b858b505311524e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:26:32 compute-0 nova_compute[189440]: 2025-12-11 14:26:32.684 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:26:33 compute-0 nova_compute[189440]: 2025-12-11 14:26:33.054 189444 DEBUG nova.policy [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '719b5c4df50d474091f6f471803c8a13', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '16cfe265641045f6adca23a64917736e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.478 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:26:35 compute-0 podman[251222]: 2025-12-11 14:26:35.487524592 +0000 UTC m=+0.084764007 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.553 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e.part --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.554 189444 DEBUG nova.virt.images [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] 64e29581-a774-4784-b0cb-b4428b3222f4 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.555 189444 DEBUG nova.privsep.utils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.556 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e.part /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.880 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e.part /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e.converted" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.886 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.945 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.947 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:26:35 compute-0 nova_compute[189440]: 2025-12-11 14:26:35.961 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.020 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.021 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Acquiring lock "b9398531008bd76fff67b1480b858b505311524e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.022 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.034 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.050 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.085 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.096 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.096 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e,backing_fmt=raw /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.141 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e,backing_fmt=raw /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.143 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.144 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.215 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.216 189444 DEBUG nova.virt.disk.api [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Checking if we can resize image /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.217 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.280 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.281 189444 DEBUG nova.virt.disk.api [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Cannot resize image /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.282 189444 DEBUG nova.objects.instance [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lazy-loading 'migration_context' on Instance uuid f64b46b2-b462-4f18-99a0-33cce11b70c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.329 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.329 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Ensure instance console log exists: /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.330 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.330 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.331 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:26:36 compute-0 nova_compute[189440]: 2025-12-11 14:26:36.633 189444 DEBUG nova.network.neutron [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Successfully created port: 38f9dcea-bf59-4044-812a-7bf30f595c5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 11 14:26:37 compute-0 podman[251269]: 2025-12-11 14:26:37.486456916 +0000 UTC m=+0.079331624 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:26:37 compute-0 podman[251270]: 2025-12-11 14:26:37.501888353 +0000 UTC m=+0.100362788 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, version=9.4, io.openshift.expose-services=, config_id=edpm, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Dec 11 14:26:37 compute-0 podman[251271]: 2025-12-11 14:26:37.504439525 +0000 UTC m=+0.089043090 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, io.buildah.version=1.41.4)
Dec 11 14:26:41 compute-0 nova_compute[189440]: 2025-12-11 14:26:41.024 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:41 compute-0 nova_compute[189440]: 2025-12-11 14:26:41.087 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:43 compute-0 nova_compute[189440]: 2025-12-11 14:26:43.024 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:43 compute-0 nova_compute[189440]: 2025-12-11 14:26:43.439 189444 DEBUG nova.network.neutron [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Successfully updated port: 38f9dcea-bf59-4044-812a-7bf30f595c5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 11 14:26:43 compute-0 podman[251322]: 2025-12-11 14:26:43.523942032 +0000 UTC m=+0.125289838 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 14:26:43 compute-0 nova_compute[189440]: 2025-12-11 14:26:43.629 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Acquiring lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:26:43 compute-0 nova_compute[189440]: 2025-12-11 14:26:43.629 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Acquired lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:26:43 compute-0 nova_compute[189440]: 2025-12-11 14:26:43.630 189444 DEBUG nova.network.neutron [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:26:46 compute-0 nova_compute[189440]: 2025-12-11 14:26:46.036 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:46 compute-0 nova_compute[189440]: 2025-12-11 14:26:46.092 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:46 compute-0 podman[251349]: 2025-12-11 14:26:46.509338076 +0000 UTC m=+0.094855184 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 14:26:46 compute-0 podman[251348]: 2025-12-11 14:26:46.512716038 +0000 UTC m=+0.110243609 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Dec 11 14:26:47 compute-0 nova_compute[189440]: 2025-12-11 14:26:47.550 189444 DEBUG nova.network.neutron [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.183 189444 DEBUG nova.network.neutron [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updating instance_info_cache with network_info: [{"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.207 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Releasing lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.207 189444 DEBUG nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Instance network_info: |[{"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.212 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Start _get_guest_xml network_info=[{"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-11T14:25:25Z,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-11T14:25:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '64e29581-a774-4784-b0cb-b4428b3222f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.221 189444 WARNING nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.233 189444 DEBUG nova.virt.libvirt.host [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.234 189444 DEBUG nova.virt.libvirt.host [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.240 189444 DEBUG nova.virt.libvirt.host [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.240 189444 DEBUG nova.virt.libvirt.host [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.241 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.241 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:25:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='639c6f85-2c0f-4003-98b6-94c63eeb9fc7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-11T14:25:25Z,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-11T14:25:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.241 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.242 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.242 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.242 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.242 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.242 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.243 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.243 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.243 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.243 189444 DEBUG nova.virt.hardware [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.248 189444 DEBUG nova.virt.libvirt.vif [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1930571022',display_name='tempest-ServerAddressesTestJSON-server-1930571022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1930571022',id=6,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16cfe265641045f6adca23a64917736e',ramdisk_id='',reservation_id='r-peu9i05h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1021471966',owner_user_name='tempest-ServerAddresses
TestJSON-1021471966-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:26:32Z,user_data=None,user_id='719b5c4df50d474091f6f471803c8a13',uuid=f64b46b2-b462-4f18-99a0-33cce11b70c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.248 189444 DEBUG nova.network.os_vif_util [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Converting VIF {"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.249 189444 DEBUG nova.network.os_vif_util [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ef:3e,bridge_name='br-int',has_traffic_filtering=True,id=38f9dcea-bf59-4044-812a-7bf30f595c5c,network=Network(8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38f9dcea-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.250 189444 DEBUG nova.objects.instance [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lazy-loading 'pci_devices' on Instance uuid f64b46b2-b462-4f18-99a0-33cce11b70c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.264 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <uuid>f64b46b2-b462-4f18-99a0-33cce11b70c3</uuid>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <name>instance-00000006</name>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <memory>131072</memory>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <nova:name>tempest-ServerAddressesTestJSON-server-1930571022</nova:name>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:26:50</nova:creationTime>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <nova:flavor name="m1.nano">
Dec 11 14:26:50 compute-0 nova_compute[189440]:        <nova:memory>128</nova:memory>
Dec 11 14:26:50 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:26:50 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:26:50 compute-0 nova_compute[189440]:        <nova:ephemeral>0</nova:ephemeral>
Dec 11 14:26:50 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:26:50 compute-0 nova_compute[189440]:        <nova:user uuid="719b5c4df50d474091f6f471803c8a13">tempest-ServerAddressesTestJSON-1021471966-project-member</nova:user>
Dec 11 14:26:50 compute-0 nova_compute[189440]:        <nova:project uuid="16cfe265641045f6adca23a64917736e">tempest-ServerAddressesTestJSON-1021471966</nova:project>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="64e29581-a774-4784-b0cb-b4428b3222f4"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <nova:ports>
Dec 11 14:26:50 compute-0 nova_compute[189440]:        <nova:port uuid="38f9dcea-bf59-4044-812a-7bf30f595c5c">
Dec 11 14:26:50 compute-0 nova_compute[189440]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:        </nova:port>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      </nova:ports>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <system>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <entry name="serial">f64b46b2-b462-4f18-99a0-33cce11b70c3</entry>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <entry name="uuid">f64b46b2-b462-4f18-99a0-33cce11b70c3</entry>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    </system>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <os>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  </os>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <features>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  </features>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.config"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <interface type="ethernet">
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <mac address="fa:16:3e:f3:ef:3e"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <driver name="vhost" rx_queue_size="512"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <mtu size="1442"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <target dev="tap38f9dcea-bf"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    </interface>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/console.log" append="off"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <video>
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    </video>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:26:50 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:26:50 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:26:50 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:26:50 compute-0 nova_compute[189440]: </domain>
Dec 11 14:26:50 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.265 189444 DEBUG nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Preparing to wait for external event network-vif-plugged-38f9dcea-bf59-4044-812a-7bf30f595c5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.265 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Acquiring lock "f64b46b2-b462-4f18-99a0-33cce11b70c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.265 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.265 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.266 189444 DEBUG nova.virt.libvirt.vif [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1930571022',display_name='tempest-ServerAddressesTestJSON-server-1930571022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1930571022',id=6,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16cfe265641045f6adca23a64917736e',ramdisk_id='',reservation_id='r-peu9i05h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1021471966',owner_user_name='tempest-Serve
rAddressesTestJSON-1021471966-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:26:32Z,user_data=None,user_id='719b5c4df50d474091f6f471803c8a13',uuid=f64b46b2-b462-4f18-99a0-33cce11b70c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.266 189444 DEBUG nova.network.os_vif_util [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Converting VIF {"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.267 189444 DEBUG nova.network.os_vif_util [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ef:3e,bridge_name='br-int',has_traffic_filtering=True,id=38f9dcea-bf59-4044-812a-7bf30f595c5c,network=Network(8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38f9dcea-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.267 189444 DEBUG os_vif [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ef:3e,bridge_name='br-int',has_traffic_filtering=True,id=38f9dcea-bf59-4044-812a-7bf30f595c5c,network=Network(8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38f9dcea-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.268 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.268 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.268 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.272 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.272 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38f9dcea-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.272 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap38f9dcea-bf, col_values=(('external_ids', {'iface-id': '38f9dcea-bf59-4044-812a-7bf30f595c5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:ef:3e', 'vm-uuid': 'f64b46b2-b462-4f18-99a0-33cce11b70c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.274 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:50 compute-0 NetworkManager[56353]: <info>  [1765463210.2768] manager: (tap38f9dcea-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.277 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.289 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.290 189444 INFO os_vif [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ef:3e,bridge_name='br-int',has_traffic_filtering=True,id=38f9dcea-bf59-4044-812a-7bf30f595c5c,network=Network(8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38f9dcea-bf')#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.929 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.931 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.931 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] No VIF found with MAC fa:16:3e:f3:ef:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 11 14:26:50 compute-0 nova_compute[189440]: 2025-12-11 14:26:50.933 189444 INFO nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Using config drive#033[00m
Dec 11 14:26:51 compute-0 nova_compute[189440]: 2025-12-11 14:26:51.043 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:52 compute-0 nova_compute[189440]: 2025-12-11 14:26:52.814 189444 DEBUG nova.compute.manager [req-c971b8db-c18a-4b39-8c4d-9c9eb467c5ab req-11cbc765-b0c0-4710-9b20-8550350d2234 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Received event network-changed-38f9dcea-bf59-4044-812a-7bf30f595c5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:26:52 compute-0 nova_compute[189440]: 2025-12-11 14:26:52.815 189444 DEBUG nova.compute.manager [req-c971b8db-c18a-4b39-8c4d-9c9eb467c5ab req-11cbc765-b0c0-4710-9b20-8550350d2234 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Refreshing instance network info cache due to event network-changed-38f9dcea-bf59-4044-812a-7bf30f595c5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:26:52 compute-0 nova_compute[189440]: 2025-12-11 14:26:52.816 189444 DEBUG oslo_concurrency.lockutils [req-c971b8db-c18a-4b39-8c4d-9c9eb467c5ab req-11cbc765-b0c0-4710-9b20-8550350d2234 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:26:52 compute-0 nova_compute[189440]: 2025-12-11 14:26:52.816 189444 DEBUG oslo_concurrency.lockutils [req-c971b8db-c18a-4b39-8c4d-9c9eb467c5ab req-11cbc765-b0c0-4710-9b20-8550350d2234 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:26:52 compute-0 nova_compute[189440]: 2025-12-11 14:26:52.816 189444 DEBUG nova.network.neutron [req-c971b8db-c18a-4b39-8c4d-9c9eb467c5ab req-11cbc765-b0c0-4710-9b20-8550350d2234 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Refreshing network info cache for port 38f9dcea-bf59-4044-812a-7bf30f595c5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.111 189444 INFO nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Creating config drive at /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.config#033[00m
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.118 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvt4hpmxy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.246 189444 DEBUG oslo_concurrency.processutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvt4hpmxy" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:26:53 compute-0 kernel: tap38f9dcea-bf: entered promiscuous mode
Dec 11 14:26:53 compute-0 ovn_controller[97832]: 2025-12-11T14:26:53Z|00066|binding|INFO|Claiming lport 38f9dcea-bf59-4044-812a-7bf30f595c5c for this chassis.
Dec 11 14:26:53 compute-0 ovn_controller[97832]: 2025-12-11T14:26:53Z|00067|binding|INFO|38f9dcea-bf59-4044-812a-7bf30f595c5c: Claiming fa:16:3e:f3:ef:3e 10.100.0.4
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.354 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:53 compute-0 NetworkManager[56353]: <info>  [1765463213.3584] manager: (tap38f9dcea-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec 11 14:26:53 compute-0 ovn_controller[97832]: 2025-12-11T14:26:53Z|00068|binding|INFO|Setting lport 38f9dcea-bf59-4044-812a-7bf30f595c5c ovn-installed in OVS
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.372 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.386 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:53 compute-0 systemd-machined[155778]: New machine qemu-6-instance-00000006.
Dec 11 14:26:53 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec 11 14:26:53 compute-0 systemd-udevd[251411]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:26:53 compute-0 NetworkManager[56353]: <info>  [1765463213.4677] device (tap38f9dcea-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 14:26:53 compute-0 NetworkManager[56353]: <info>  [1765463213.4685] device (tap38f9dcea-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 11 14:26:53 compute-0 ovn_controller[97832]: 2025-12-11T14:26:53Z|00069|binding|INFO|Setting lport 38f9dcea-bf59-4044-812a-7bf30f595c5c up in Southbound
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.473 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:ef:3e 10.100.0.4'], port_security=['fa:16:3e:f3:ef:3e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f64b46b2-b462-4f18-99a0-33cce11b70c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16cfe265641045f6adca23a64917736e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67ac4779-5ffc-4ded-9e81-b259ef8402ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56fe91c2-e6f9-45de-b3ac-c18f85a77884, chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=38f9dcea-bf59-4044-812a-7bf30f595c5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.474 106686 INFO neutron.agent.ovn.metadata.agent [-] Port 38f9dcea-bf59-4044-812a-7bf30f595c5c in datapath 8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0 bound to our chassis#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.477 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.494 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[b31f4a35-fc8c-4945-8d52-c28970decd80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.495 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a57e9b6-21 in ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.498 239832 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a57e9b6-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.498 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[78219d3a-7aa6-4de6-bef8-4fc68e0bdac7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.499 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7572054e-a752-4bb6-9aad-54d2af791983]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.515 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8e679e-b886-42c7-ab94-c936dc4a7282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.546 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7218ac74-1c79-467d-8fc9-6288b4a7f995]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.591 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[64b3f8ff-5e54-4ae5-b19f-332256888562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 NetworkManager[56353]: <info>  [1765463213.6022] manager: (tap8a57e9b6-20): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.600 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[8b00748f-f66e-43d3-a538-d67bf56b6d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.643 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[0413b989-6646-46fa-a7b8-7ae6be400b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.646 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6001d4-59e3-40b4-baf0-dd724bad72d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 NetworkManager[56353]: <info>  [1765463213.6776] device (tap8a57e9b6-20): carrier: link connected
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.688 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[220617dd-eb72-4c82-8504-dbd590ada53b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.713 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[63600c1a-1722-4b5f-8836-4397dad7db41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a57e9b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:25:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527400, 'reachable_time': 39060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251444, 'error': None, 'target': 'ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.732 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[33d3d5e0-087e-4265-b713-262b71c7c25e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:25d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527400, 'tstamp': 527400}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251445, 'error': None, 'target': 'ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.757 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[f6715208-8f06-41c0-845d-4343ec7c60ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a57e9b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:25:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527400, 'reachable_time': 39060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251446, 'error': None, 'target': 'ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.792 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[2236b274-1f35-4333-941a-3654c71781f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.881 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[e0512353-b110-4b41-910c-8d6e60ae6ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.883 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a57e9b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.883 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.884 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a57e9b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.887 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:53 compute-0 kernel: tap8a57e9b6-20: entered promiscuous mode
Dec 11 14:26:53 compute-0 NetworkManager[56353]: <info>  [1765463213.8902] manager: (tap8a57e9b6-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.898 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.906 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a57e9b6-20, col_values=(('external_ids', {'iface-id': '33f7bdab-616d-48cf-a80b-a3a17467ce09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.909 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.911 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:53 compute-0 ovn_controller[97832]: 2025-12-11T14:26:53Z|00070|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.914 106686 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.915 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[b4eae262-ec2a-40af-9792-d2fe681a4858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.917 106686 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: global
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    log         /dev/log local0 debug
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    log-tag     haproxy-metadata-proxy-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    user        root
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    group       root
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    maxconn     1024
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    pidfile     /var/lib/neutron/external/pids/8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0.pid.haproxy
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    daemon
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: defaults
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    log global
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    mode http
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    option httplog
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    option dontlognull
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    option http-server-close
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    option forwardfor
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    retries                 3
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    timeout http-request    30s
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    timeout connect         30s
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    timeout client          32s
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    timeout server          32s
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    timeout http-keep-alive 30s
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: listen listener
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    bind 169.254.169.254:80
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    server metadata /var/lib/neutron/metadata_proxy
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]:    http-request add-header X-OVN-Network-ID 8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 11 14:26:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:26:53.917 106686 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0', 'env', 'PROCESS_TAG=haproxy-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 11 14:26:53 compute-0 nova_compute[189440]: 2025-12-11 14:26:53.934 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:54 compute-0 nova_compute[189440]: 2025-12-11 14:26:54.079 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463214.078295, f64b46b2-b462-4f18-99a0-33cce11b70c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:26:54 compute-0 nova_compute[189440]: 2025-12-11 14:26:54.080 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] VM Started (Lifecycle Event)#033[00m
Dec 11 14:26:54 compute-0 podman[251484]: 2025-12-11 14:26:54.343985451 +0000 UTC m=+0.043081777 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 11 14:26:55 compute-0 nova_compute[189440]: 2025-12-11 14:26:55.275 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:55 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 11 14:26:55 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 11 14:26:55 compute-0 nova_compute[189440]: 2025-12-11 14:26:55.632 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:26:55 compute-0 nova_compute[189440]: 2025-12-11 14:26:55.642 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463214.0784366, f64b46b2-b462-4f18-99a0-33cce11b70c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:26:55 compute-0 nova_compute[189440]: 2025-12-11 14:26:55.642 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] VM Paused (Lifecycle Event)#033[00m
Dec 11 14:26:55 compute-0 nova_compute[189440]: 2025-12-11 14:26:55.873 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:26:55 compute-0 nova_compute[189440]: 2025-12-11 14:26:55.883 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:26:56 compute-0 nova_compute[189440]: 2025-12-11 14:26:56.044 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:26:56 compute-0 nova_compute[189440]: 2025-12-11 14:26:56.045 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:26:57 compute-0 nova_compute[189440]: 2025-12-11 14:26:57.793 189444 DEBUG nova.network.neutron [req-c971b8db-c18a-4b39-8c4d-9c9eb467c5ab req-11cbc765-b0c0-4710-9b20-8550350d2234 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updated VIF entry in instance network info cache for port 38f9dcea-bf59-4044-812a-7bf30f595c5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:26:57 compute-0 nova_compute[189440]: 2025-12-11 14:26:57.794 189444 DEBUG nova.network.neutron [req-c971b8db-c18a-4b39-8c4d-9c9eb467c5ab req-11cbc765-b0c0-4710-9b20-8550350d2234 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updating instance_info_cache with network_info: [{"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.002 189444 DEBUG oslo_concurrency.lockutils [req-c971b8db-c18a-4b39-8c4d-9c9eb467c5ab req-11cbc765-b0c0-4710-9b20-8550350d2234 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.078 189444 DEBUG nova.compute.manager [req-eeb4c29e-5fc8-4421-b1bb-5267bfc31097 req-0dbf762f-e615-4999-af1a-5f4b1f4182bc a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Received event network-vif-plugged-38f9dcea-bf59-4044-812a-7bf30f595c5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.079 189444 DEBUG oslo_concurrency.lockutils [req-eeb4c29e-5fc8-4421-b1bb-5267bfc31097 req-0dbf762f-e615-4999-af1a-5f4b1f4182bc a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "f64b46b2-b462-4f18-99a0-33cce11b70c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.080 189444 DEBUG oslo_concurrency.lockutils [req-eeb4c29e-5fc8-4421-b1bb-5267bfc31097 req-0dbf762f-e615-4999-af1a-5f4b1f4182bc a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.081 189444 DEBUG oslo_concurrency.lockutils [req-eeb4c29e-5fc8-4421-b1bb-5267bfc31097 req-0dbf762f-e615-4999-af1a-5f4b1f4182bc a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.082 189444 DEBUG nova.compute.manager [req-eeb4c29e-5fc8-4421-b1bb-5267bfc31097 req-0dbf762f-e615-4999-af1a-5f4b1f4182bc a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Processing event network-vif-plugged-38f9dcea-bf59-4044-812a-7bf30f595c5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.083 189444 DEBUG nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.095 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463219.0925658, f64b46b2-b462-4f18-99a0-33cce11b70c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.096 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] VM Resumed (Lifecycle Event)#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.098 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.106 189444 INFO nova.virt.libvirt.driver [-] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Instance spawned successfully.#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.107 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.420 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.427 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.427 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.428 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.428 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.429 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.430 189444 DEBUG nova.virt.libvirt.driver [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.434 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:26:59 compute-0 podman[203650]: time="2025-12-11T14:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:26:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 11 14:26:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec 11 14:26:59 compute-0 nova_compute[189440]: 2025-12-11 14:26:59.823 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:27:00 compute-0 nova_compute[189440]: 2025-12-11 14:27:00.281 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:00 compute-0 podman[251484]: 2025-12-11 14:27:00.715233284 +0000 UTC m=+6.414329620 container create 8969c12b472eb9787039e783a82577b4bcfd6ad63802a9894d026edd52a1a087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 11 14:27:00 compute-0 podman[251516]: 2025-12-11 14:27:00.766443578 +0000 UTC m=+0.346193293 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:27:00 compute-0 podman[251517]: 2025-12-11 14:27:00.767650163 +0000 UTC m=+0.354445044 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:27:00 compute-0 systemd[1]: Started libpod-conmon-8969c12b472eb9787039e783a82577b4bcfd6ad63802a9894d026edd52a1a087.scope.
Dec 11 14:27:00 compute-0 systemd[1]: Started libcrun container.
Dec 11 14:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cd2927cb6927ac11c965baac735dd2f7d0183e0a59e44be4c7e8c16a07250c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 11 14:27:00 compute-0 podman[251484]: 2025-12-11 14:27:00.860352099 +0000 UTC m=+6.559448415 container init 8969c12b472eb9787039e783a82577b4bcfd6ad63802a9894d026edd52a1a087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 11 14:27:00 compute-0 podman[251484]: 2025-12-11 14:27:00.870733081 +0000 UTC m=+6.569829377 container start 8969c12b472eb9787039e783a82577b4bcfd6ad63802a9894d026edd52a1a087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 11 14:27:00 compute-0 neutron-haproxy-ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0[251555]: [NOTICE]   (251559) : New worker (251561) forked
Dec 11 14:27:00 compute-0 neutron-haproxy-ovnmeta-8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0[251555]: [NOTICE]   (251559) : Loading success.
Dec 11 14:27:01 compute-0 nova_compute[189440]: 2025-12-11 14:27:01.044 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:01 compute-0 nova_compute[189440]: 2025-12-11 14:27:01.112 189444 INFO nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Took 28.43 seconds to spawn the instance on the hypervisor.#033[00m
Dec 11 14:27:01 compute-0 nova_compute[189440]: 2025-12-11 14:27:01.112 189444 DEBUG nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:27:01 compute-0 openstack_network_exporter[205834]: ERROR   14:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:27:01 compute-0 openstack_network_exporter[205834]: ERROR   14:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:27:01 compute-0 openstack_network_exporter[205834]: ERROR   14:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:27:01 compute-0 openstack_network_exporter[205834]: ERROR   14:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:27:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:27:01 compute-0 openstack_network_exporter[205834]: ERROR   14:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:27:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:27:03 compute-0 nova_compute[189440]: 2025-12-11 14:27:03.177 189444 INFO nova.compute.manager [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Took 31.52 seconds to build instance.#033[00m
Dec 11 14:27:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:27:04.109 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:27:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:27:04.110 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:27:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:27:04.111 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:27:04 compute-0 nova_compute[189440]: 2025-12-11 14:27:04.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:27:05 compute-0 nova_compute[189440]: 2025-12-11 14:27:05.284 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:05 compute-0 nova_compute[189440]: 2025-12-11 14:27:05.711 189444 DEBUG oslo_concurrency.lockutils [None req-afa62ee0-ccca-4cbc-9c8b-00b49c76f897 719b5c4df50d474091f6f471803c8a13 16cfe265641045f6adca23a64917736e - - default default] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 34.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:27:06 compute-0 nova_compute[189440]: 2025-12-11 14:27:06.046 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:06 compute-0 podman[251573]: 2025-12-11 14:27:06.490957139 +0000 UTC m=+0.091593193 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 11 14:27:08 compute-0 nova_compute[189440]: 2025-12-11 14:27:08.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:27:08 compute-0 nova_compute[189440]: 2025-12-11 14:27:08.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:27:08 compute-0 podman[251595]: 2025-12-11 14:27:08.499428056 +0000 UTC m=+0.081901851 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0)
Dec 11 14:27:08 compute-0 podman[251594]: 2025-12-11 14:27:08.52320874 +0000 UTC m=+0.109815925 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Dec 11 14:27:08 compute-0 podman[251593]: 2025-12-11 14:27:08.5252656 +0000 UTC m=+0.110032752 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 11 14:27:09 compute-0 nova_compute[189440]: 2025-12-11 14:27:09.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:27:09 compute-0 nova_compute[189440]: 2025-12-11 14:27:09.785 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Acquiring lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:27:09 compute-0 nova_compute[189440]: 2025-12-11 14:27:09.786 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:27:10 compute-0 nova_compute[189440]: 2025-12-11 14:27:10.057 189444 DEBUG nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 11 14:27:10 compute-0 nova_compute[189440]: 2025-12-11 14:27:10.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:27:10 compute-0 nova_compute[189440]: 2025-12-11 14:27:10.289 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:11 compute-0 nova_compute[189440]: 2025-12-11 14:27:11.051 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:11 compute-0 nova_compute[189440]: 2025-12-11 14:27:11.142 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:27:11 compute-0 nova_compute[189440]: 2025-12-11 14:27:11.142 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:27:11 compute-0 nova_compute[189440]: 2025-12-11 14:27:11.202 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 11 14:27:11 compute-0 nova_compute[189440]: 2025-12-11 14:27:11.203 189444 INFO nova.compute.claims [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 11 14:27:12 compute-0 nova_compute[189440]: 2025-12-11 14:27:12.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:27:12 compute-0 nova_compute[189440]: 2025-12-11 14:27:12.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:27:12 compute-0 nova_compute[189440]: 2025-12-11 14:27:12.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:27:14 compute-0 podman[251649]: 2025-12-11 14:27:14.541850884 +0000 UTC m=+0.142009045 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 11 14:27:15 compute-0 nova_compute[189440]: 2025-12-11 14:27:15.291 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:16 compute-0 nova_compute[189440]: 2025-12-11 14:27:16.056 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:16 compute-0 nova_compute[189440]: 2025-12-11 14:27:16.176 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 11 14:27:16 compute-0 nova_compute[189440]: 2025-12-11 14:27:16.202 189444 DEBUG nova.compute.provider_tree [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.143 189444 DEBUG nova.compute.manager [req-4a751b32-2305-4463-9a0c-4adf9dec6f0c req-1185238a-f8f7-4c77-9ff8-af200304c05f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Received event network-vif-plugged-38f9dcea-bf59-4044-812a-7bf30f595c5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.144 189444 DEBUG oslo_concurrency.lockutils [req-4a751b32-2305-4463-9a0c-4adf9dec6f0c req-1185238a-f8f7-4c77-9ff8-af200304c05f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "f64b46b2-b462-4f18-99a0-33cce11b70c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.146 189444 DEBUG oslo_concurrency.lockutils [req-4a751b32-2305-4463-9a0c-4adf9dec6f0c req-1185238a-f8f7-4c77-9ff8-af200304c05f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.147 189444 DEBUG oslo_concurrency.lockutils [req-4a751b32-2305-4463-9a0c-4adf9dec6f0c req-1185238a-f8f7-4c77-9ff8-af200304c05f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.147 189444 DEBUG nova.compute.manager [req-4a751b32-2305-4463-9a0c-4adf9dec6f0c req-1185238a-f8f7-4c77-9ff8-af200304c05f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] No waiting events found dispatching network-vif-plugged-38f9dcea-bf59-4044-812a-7bf30f595c5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.148 189444 WARNING nova.compute.manager [req-4a751b32-2305-4463-9a0c-4adf9dec6f0c req-1185238a-f8f7-4c77-9ff8-af200304c05f a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Received unexpected event network-vif-plugged-38f9dcea-bf59-4044-812a-7bf30f595c5c for instance with vm_state active and task_state None.#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.226 189444 DEBUG nova.scheduler.client.report [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.380 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 6.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.382 189444 DEBUG nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.519 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.519 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.520 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:27:17 compute-0 nova_compute[189440]: 2025-12-11 14:27:17.520 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64b46b2-b462-4f18-99a0-33cce11b70c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:27:17 compute-0 podman[251675]: 2025-12-11 14:27:17.530516723 +0000 UTC m=+0.119125958 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Dec 11 14:27:17 compute-0 podman[251676]: 2025-12-11 14:27:17.534834499 +0000 UTC m=+0.118281663 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:27:20 compute-0 nova_compute[189440]: 2025-12-11 14:27:20.294 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:20 compute-0 nova_compute[189440]: 2025-12-11 14:27:20.328 189444 DEBUG nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 11 14:27:20 compute-0 nova_compute[189440]: 2025-12-11 14:27:20.328 189444 DEBUG nova.network.neutron [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 11 14:27:21 compute-0 nova_compute[189440]: 2025-12-11 14:27:21.060 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:21 compute-0 nova_compute[189440]: 2025-12-11 14:27:21.085 189444 DEBUG nova.policy [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a714564f83e74b39aa33b964e9913421', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0f7c7a5f01c4c7a9fd2fa3668dcd463', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 11 14:27:22 compute-0 nova_compute[189440]: 2025-12-11 14:27:22.415 189444 INFO nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 11 14:27:25 compute-0 nova_compute[189440]: 2025-12-11 14:27:25.299 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:26 compute-0 ovn_controller[97832]: 2025-12-11T14:27:26Z|00071|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Dec 11 14:27:26 compute-0 nova_compute[189440]: 2025-12-11 14:27:26.064 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:29 compute-0 podman[203650]: time="2025-12-11T14:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:27:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:27:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec 11 14:27:30 compute-0 nova_compute[189440]: 2025-12-11 14:27:30.304 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:31 compute-0 nova_compute[189440]: 2025-12-11 14:27:31.068 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:31 compute-0 nova_compute[189440]: 2025-12-11 14:27:31.248 189444 DEBUG nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 11 14:27:31 compute-0 openstack_network_exporter[205834]: ERROR   14:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:27:31 compute-0 openstack_network_exporter[205834]: ERROR   14:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:27:31 compute-0 openstack_network_exporter[205834]: ERROR   14:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:27:31 compute-0 openstack_network_exporter[205834]: ERROR   14:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:27:31 compute-0 openstack_network_exporter[205834]: ERROR   14:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:27:31 compute-0 podman[251715]: 2025-12-11 14:27:31.493017006 +0000 UTC m=+0.083564299 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:27:31 compute-0 podman[251716]: 2025-12-11 14:27:31.5360114 +0000 UTC m=+0.112146633 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:27:34 compute-0 ovn_controller[97832]: 2025-12-11T14:27:34Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:ef:3e 10.100.0.4
Dec 11 14:27:34 compute-0 ovn_controller[97832]: 2025-12-11T14:27:34Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:ef:3e 10.100.0.4
Dec 11 14:27:35 compute-0 nova_compute[189440]: 2025-12-11 14:27:35.308 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:36 compute-0 nova_compute[189440]: 2025-12-11 14:27:36.072 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:37 compute-0 podman[251770]: 2025-12-11 14:27:37.518363323 +0000 UTC m=+0.111538006 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 11 14:27:39 compute-0 podman[251791]: 2025-12-11 14:27:39.488541424 +0000 UTC m=+0.079318415 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 11 14:27:39 compute-0 podman[251789]: 2025-12-11 14:27:39.505408566 +0000 UTC m=+0.101371769 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 11 14:27:39 compute-0 podman[251790]: 2025-12-11 14:27:39.528753978 +0000 UTC m=+0.122738203 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Dec 11 14:27:40 compute-0 nova_compute[189440]: 2025-12-11 14:27:40.312 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:41 compute-0 nova_compute[189440]: 2025-12-11 14:27:41.075 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.992 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.995 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.004 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f64b46b2-b462-4f18-99a0-33cce11b70c3 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 11 14:27:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:43.006 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f64b46b2-b462-4f18-99a0-33cce11b70c3 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}cccfdb98f7814d2104ef30522629f30f2e7025f3d377e4b2e1b0c401a523009e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 11 14:27:44 compute-0 podman[251844]: 2025-12-11 14:27:44.819419165 +0000 UTC m=+0.145912930 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 11 14:27:45 compute-0 nova_compute[189440]: 2025-12-11 14:27:45.316 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:46 compute-0 nova_compute[189440]: 2025-12-11 14:27:46.081 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:46 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:27:46.275 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:27:46 compute-0 nova_compute[189440]: 2025-12-11 14:27:46.276 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:46 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:27:46.277 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.351 189444 DEBUG nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.354 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.355 189444 INFO nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Creating image(s)#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.357 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Acquiring lock "/var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.358 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "/var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.360 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "/var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.393 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.455 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.457 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Acquiring lock "b9398531008bd76fff67b1480b858b505311524e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.459 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.483 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.545 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.547 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e,backing_fmt=raw /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.596 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e,backing_fmt=raw /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.597 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.598 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.661 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.663 189444 DEBUG nova.virt.disk.api [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Checking if we can resize image /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.664 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.729 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.730 189444 DEBUG nova.virt.disk.api [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Cannot resize image /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec 11 14:27:47 compute-0 nova_compute[189440]: 2025-12-11 14:27:47.731 189444 DEBUG nova.objects.instance [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lazy-loading 'migration_context' on Instance uuid 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:27:48 compute-0 podman[251888]: 2025-12-11 14:27:48.485910541 +0000 UTC m=+0.082309983 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:27:48 compute-0 podman[251887]: 2025-12-11 14:27:48.519865402 +0000 UTC m=+0.109385733 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7)
Dec 11 14:27:48 compute-0 nova_compute[189440]: 2025-12-11 14:27:48.757 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 11 14:27:48 compute-0 nova_compute[189440]: 2025-12-11 14:27:48.758 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Ensure instance console log exists: /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 11 14:27:48 compute-0 nova_compute[189440]: 2025-12-11 14:27:48.759 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:27:48 compute-0 nova_compute[189440]: 2025-12-11 14:27:48.759 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:27:48 compute-0 nova_compute[189440]: 2025-12-11 14:27:48.760 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:27:50 compute-0 nova_compute[189440]: 2025-12-11 14:27:50.318 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:51 compute-0 nova_compute[189440]: 2025-12-11 14:27:51.083 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:27:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:27:52.282 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.734 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1813 Content-Type: application/json Date: Thu, 11 Dec 2025 14:27:43 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e54ae9b4-a60e-4be5-ac5f-890781272534 x-openstack-request-id: req-e54ae9b4-a60e-4be5-ac5f-890781272534 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.735 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f64b46b2-b462-4f18-99a0-33cce11b70c3", "name": "tempest-ServerAddressesTestJSON-server-1930571022", "status": "ACTIVE", "tenant_id": "16cfe265641045f6adca23a64917736e", "user_id": "719b5c4df50d474091f6f471803c8a13", "metadata": {}, "hostId": "2fcddfdd3b298ab69316782a145f6113cf5f677ad9bc894793473b66", "image": {"id": "64e29581-a774-4784-b0cb-b4428b3222f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/64e29581-a774-4784-b0cb-b4428b3222f4"}]}, "flavor": {"id": "639c6f85-2c0f-4003-98b6-94c63eeb9fc7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/639c6f85-2c0f-4003-98b6-94c63eeb9fc7"}]}, "created": "2025-12-11T14:26:27Z", "updated": "2025-12-11T14:27:01Z", "addresses": {"tempest-ServerAddressesTestJSON-2142628490-network": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f3:ef:3e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f64b46b2-b462-4f18-99a0-33cce11b70c3"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f64b46b2-b462-4f18-99a0-33cce11b70c3"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-11T14:27:01.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.735 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f64b46b2-b462-4f18-99a0-33cce11b70c3 used request id req-e54ae9b4-a60e-4be5-ac5f-890781272534 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.736 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64b46b2-b462-4f18-99a0-33cce11b70c3', 'name': 'tempest-ServerAddressesTestJSON-server-1930571022', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16cfe265641045f6adca23a64917736e', 'user_id': '719b5c4df50d474091f6f471803c8a13', 'hostId': '2fcddfdd3b298ab69316782a145f6113cf5f677ad9bc894793473b66', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.737 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:27:54.737416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.744 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f64b46b2-b462-4f18-99a0-33cce11b70c3 / tap38f9dcea-bf inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.745 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.746 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.746 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.746 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.747 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.747 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:27:54.747149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.774 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/cpu volume: 34760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.775 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.775 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.775 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.775 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.776 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:27:54.775600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.793 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.794 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.794 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.794 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.794 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.794 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.794 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.795 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.795 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.795 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.795 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.795 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.795 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.795 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/memory.usage volume: 42.56640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.796 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.796 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.796 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.796 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.797 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:27:54.794947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.797 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:27:54.795870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.797 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.797 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.798 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.798 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.799 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.799 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.800 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerAddressesTestJSON-server-1930571022>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerAddressesTestJSON-server-1930571022>]
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.800 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.800 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:27:54.797253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-11T14:27:54.799663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.801 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.801 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:27:54.801412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.802 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.802 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.803 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.803 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:27:54.803111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.803 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.804 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.804 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.804 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.804 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.804 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.805 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.805 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.806 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.806 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:27:54.804480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.806 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.807 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:27:54.806470) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.807 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.807 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.808 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.808 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.808 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:27:54.808244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.855 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 715818456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.855 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 141083317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.856 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.856 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.856 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.857 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.857 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 1133 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.857 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:27:54.856941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.858 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.858 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.858 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.858 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.858 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.858 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:27:54.858691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.859 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.860 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.860 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.860 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 72892416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.861 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:27:54.860566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.861 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.862 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.862 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:27:54.862601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.863 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 10551527670 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.863 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.863 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.864 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.864 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.865 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.866 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:27:54.864749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.866 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.866 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.867 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.867 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.868 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:27:54.866579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.868 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.869 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.869 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.869 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:27:54.868968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.870 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.870 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.870 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-11T14:27:54.870695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.871 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.871 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerAddressesTestJSON-server-1930571022>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerAddressesTestJSON-server-1930571022>]
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.871 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.871 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.872 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.872 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:27:54.872260) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.873 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.873 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.873 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.874 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.874 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.875 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:27:54.873883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.876 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.876 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.876 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:27:54.876491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.877 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.877 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.877 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.877 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.878 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.878 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.879 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.879 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.880 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:27:54.878205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.880 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:27:54.879834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.879 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.880 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.881 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.881 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.881 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.881 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.881 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.881 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 31009280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.882 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.882 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:27:54.881681) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.883 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.883 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:54 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:27:54.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:27:55 compute-0 nova_compute[189440]: 2025-12-11 14:27:55.323 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.087 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.199 189444 DEBUG neutronclient.v2_0.client [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Error message: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] An error occurred while refreshing the network cache.: neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Dec 11 14:27:56 compute-0 nova_compute[189440]: The Keystone service is temporarily unavailable.
Dec 11 14:27:56 compute-0 nova_compute[189440]: 
Dec 11 14:27:56 compute-0 nova_compute[189440]: 
Dec 11 14:27:56 compute-0 nova_compute[189440]: Neutron server returns request_ids: ['req-6b80b476-4131-4286-8d2a-af70e3b4df23']
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Traceback (most recent call last):
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9927, in _heal_instance_info_cache
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     self.network_api.get_instance_nw_info(
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1990, in get_instance_nw_info
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     result = self._get_instance_nw_info(context, instance, **kwargs)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 2016, in _get_instance_nw_info
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     nw_info = self._build_network_info_model(context, instance, networks,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 3534, in _build_network_info_model
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     vif = self._build_vif_model(
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 3353, in _build_vif_model
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     subnets = self._nw_info_get_subnets(context,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 3238, in _nw_info_get_subnets
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     subnets = self._get_subnets_from_port(context, port, client)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 3623, in _get_subnets_from_port
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     data = client.list_ports(**dhcp_search_opts)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     ret = obj(*args, **kwargs)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 815, in list_ports
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     return self.list('ports', self.ports_path, retrieve_all,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     ret = obj(*args, **kwargs)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     for r in self._pagination(collection, path, **params):
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     res = self.get(path, params=params)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     ret = obj(*args, **kwargs)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     return self.retry_request("GET", action, body=body,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     ret = obj(*args, **kwargs)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     return self.do_request(method, action, body=body,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     ret = obj(*args, **kwargs)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 297, in do_request
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     self._handle_fault_response(status_code, replybody, resp)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     ret = obj(*args, **kwargs)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     exception_handler_v20(status_code, error_body)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]     raise client_exc(message=error_message,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] The Keystone service is temporarily unavailable.
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] 
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] 
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Neutron server returns request_ids: ['req-6b80b476-4131-4286-8d2a-af70e3b4df23']
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.200 189444 ERROR nova.compute.manager [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3]
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.206 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.206 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.206 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 11 14:27:56 compute-0 nova_compute[189440]: [SQL: SELECT 1]
Dec 11 14:27:56 compute-0 nova_compute[189440]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 11 14:27:56 compute-0 nova_compute[189440]: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: 
(pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n 
   self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openst
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task     task(self, context)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task     raise result
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 11 14:27:56 compute-0 rsyslogd[236802]: message too long (14444) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-pack [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task [SQL: SELECT 1]
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, 
multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', 
"oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return 
fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 
327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.Oper
Dec 11 14:27:56 compute-0 nova_compute[189440]: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task #033[00m
Dec 11 14:27:56 compute-0 rsyslogd[236802]: message too long (14508) with configured size 8096, begin of message is: 2025-12-11 14:27:56.381 189444 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.378 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.379 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Error during ComputeManager._sync_scheduler_instance_info: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 11 14:27:59 compute-0 nova_compute[189440]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 11 14:27:59 compute-0 nova_compute[189440]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    
return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, 
with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", 
line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n'
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task     task(self, context)
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2236, in _sync_scheduler_instance_info
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task     instances = objects.InstanceList.get_by_host(context, self.host,
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task     raise result
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in 
_handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py"
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task #033[00m
Dec 11 14:27:59 compute-0 rsyslogd[236802]: message too long (8558) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:27:59 compute-0 rsyslogd[236802]: message too long (8622) with configured size 8096, begin of message is: 2025-12-11 14:27:59.452 189444 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 11 14:27:59 compute-0 nova_compute[189440]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 11 14:27:59 compute-0 nova_compute[189440]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db     raise result
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Dec 11 14:27:59 compute-0 nova_compute[189440]: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db #033[00m
Dec 11 14:27:59 compute-0 rsyslogd[236802]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:27:59 compute-0 rsyslogd[236802]: message too long (9052) with configured size 8096, begin of message is: 2025-12-11 14:27:59.624 189444 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 11 14:27:59 compute-0 podman[203650]: time="2025-12-11T14:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:27:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29525 "" "Go-http-client/1.1"
Dec 11 14:27:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Dec 11 14:28:00 compute-0 nova_compute[189440]: 2025-12-11 14:28:00.328 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:01 compute-0 nova_compute[189440]: 2025-12-11 14:28:01.089 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:01 compute-0 openstack_network_exporter[205834]: ERROR   14:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:28:01 compute-0 openstack_network_exporter[205834]: ERROR   14:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:28:01 compute-0 openstack_network_exporter[205834]: ERROR   14:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:28:01 compute-0 openstack_network_exporter[205834]: ERROR   14:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:28:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:28:01 compute-0 openstack_network_exporter[205834]: ERROR   14:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:28:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:28:02 compute-0 podman[251933]: 2025-12-11 14:28:02.489615453 +0000 UTC m=+0.094153808 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 11 14:28:02 compute-0 podman[251934]: 2025-12-11 14:28:02.496695369 +0000 UTC m=+0.093563431 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:28:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:04.110 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:04.111 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:04.112 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:04 compute-0 nova_compute[189440]: 2025-12-11 14:28:04.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:04 compute-0 ovn_controller[97832]: 2025-12-11T14:28:04Z|00072|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:28:04 compute-0 nova_compute[189440]: 2025-12-11 14:28:04.720 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:04 compute-0 ovn_controller[97832]: 2025-12-11T14:28:04Z|00073|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:28:04 compute-0 nova_compute[189440]: 2025-12-11 14:28:04.970 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:05 compute-0 nova_compute[189440]: 2025-12-11 14:28:05.330 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:06 compute-0 nova_compute[189440]: 2025-12-11 14:28:06.093 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:06 compute-0 nova_compute[189440]: 2025-12-11 14:28:06.339 189444 DEBUG nova.network.neutron [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Successfully created port: 6427f2b4-25ae-460a-8ade-54b5aba9dff6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 11 14:28:07 compute-0 nova_compute[189440]: 2025-12-11 14:28:07.351 189444 DEBUG nova.network.neutron [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Successfully updated port: 6427f2b4-25ae-460a-8ade-54b5aba9dff6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 11 14:28:07 compute-0 nova_compute[189440]: 2025-12-11 14:28:07.857 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Acquiring lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:28:07 compute-0 nova_compute[189440]: 2025-12-11 14:28:07.857 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Acquired lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:28:07 compute-0 nova_compute[189440]: 2025-12-11 14:28:07.858 189444 DEBUG nova.network.neutron [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:28:08 compute-0 nova_compute[189440]: 2025-12-11 14:28:08.024 189444 DEBUG nova.compute.manager [req-72d5c11f-1096-4dac-8c4b-20980a3c2efd req-cc0d45ac-9835-45e8-8786-9fccb9c21e06 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Received event network-changed-6427f2b4-25ae-460a-8ade-54b5aba9dff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:28:08 compute-0 nova_compute[189440]: 2025-12-11 14:28:08.024 189444 DEBUG nova.compute.manager [req-72d5c11f-1096-4dac-8c4b-20980a3c2efd req-cc0d45ac-9835-45e8-8786-9fccb9c21e06 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Refreshing instance network info cache due to event network-changed-6427f2b4-25ae-460a-8ade-54b5aba9dff6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:28:08 compute-0 nova_compute[189440]: 2025-12-11 14:28:08.025 189444 DEBUG oslo_concurrency.lockutils [req-72d5c11f-1096-4dac-8c4b-20980a3c2efd req-cc0d45ac-9835-45e8-8786-9fccb9c21e06 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:28:08 compute-0 nova_compute[189440]: 2025-12-11 14:28:08.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:08 compute-0 nova_compute[189440]: 2025-12-11 14:28:08.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:28:08 compute-0 nova_compute[189440]: 2025-12-11 14:28:08.311 189444 DEBUG nova.network.neutron [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:28:08 compute-0 podman[251975]: 2025-12-11 14:28:08.471625731 +0000 UTC m=+0.076544174 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.616 189444 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.625 189444 DEBUG nova.network.neutron [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updating instance_info_cache with network_info: [{"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.644 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Releasing lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.645 189444 DEBUG nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Instance network_info: |[{"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.646 189444 DEBUG oslo_concurrency.lockutils [req-72d5c11f-1096-4dac-8c4b-20980a3c2efd req-cc0d45ac-9835-45e8-8786-9fccb9c21e06 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.646 189444 DEBUG nova.network.neutron [req-72d5c11f-1096-4dac-8c4b-20980a3c2efd req-cc0d45ac-9835-45e8-8786-9fccb9c21e06 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Refreshing network info cache for port 6427f2b4-25ae-460a-8ade-54b5aba9dff6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.652 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Start _get_guest_xml network_info=[{"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-11T14:25:25Z,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-11T14:25:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '64e29581-a774-4784-b0cb-b4428b3222f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.662 189444 WARNING nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.678 189444 DEBUG nova.virt.libvirt.host [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.680 189444 DEBUG nova.virt.libvirt.host [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.685 189444 DEBUG nova.virt.libvirt.host [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.686 189444 DEBUG nova.virt.libvirt.host [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.687 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.688 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:25:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='639c6f85-2c0f-4003-98b6-94c63eeb9fc7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-11T14:25:25Z,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-11T14:25:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.689 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.689 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.690 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.690 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.691 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.691 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.692 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.693 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.693 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.693 189444 DEBUG nova.virt.hardware [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.699 189444 DEBUG nova.virt.libvirt.vif [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:26:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-29252937',display_name='tempest-AttachInterfacesUnderV243Test-server-29252937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-29252937',id=7,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwy12ql5A4U6y9Lahfkc1RNRunjGbg199xLNIOKY5tApac0IqSPXNZAcb0M7IxjkkFpjYx6eQiqNNwpx7H2rDoKLMLLd6NVALp4qBWbuEUmRnH5bvJMNrq4lHjDtj7dXQ==',key_name='tempest-keypair-1484208004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0f7c7a5f01c4c7a9fd2fa3668dcd463',ramdisk_id='',reservation_id='r-mn60b6gh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1051755587',owner_user_name='tempest-AttachInterfacesUnderV243Test-1051755587-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:27:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a714564f83e74b39aa33b964e9913421',uuid=1b112e8a-c27d-4b2e-91fc-81552a0cd4ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.700 189444 DEBUG nova.network.os_vif_util [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Converting VIF {"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.701 189444 DEBUG nova.network.os_vif_util [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1f:b8,bridge_name='br-int',has_traffic_filtering=True,id=6427f2b4-25ae-460a-8ade-54b5aba9dff6,network=Network(3a7879e9-5e69-43df-aeae-21ce102a3b8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6427f2b4-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.703 189444 DEBUG nova.objects.instance [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.722 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <uuid>1b112e8a-c27d-4b2e-91fc-81552a0cd4ee</uuid>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <name>instance-00000007</name>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <memory>131072</memory>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-29252937</nova:name>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:28:09</nova:creationTime>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <nova:flavor name="m1.nano">
Dec 11 14:28:09 compute-0 nova_compute[189440]:        <nova:memory>128</nova:memory>
Dec 11 14:28:09 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:28:09 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:28:09 compute-0 nova_compute[189440]:        <nova:ephemeral>0</nova:ephemeral>
Dec 11 14:28:09 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:28:09 compute-0 nova_compute[189440]:        <nova:user uuid="a714564f83e74b39aa33b964e9913421">tempest-AttachInterfacesUnderV243Test-1051755587-project-member</nova:user>
Dec 11 14:28:09 compute-0 nova_compute[189440]:        <nova:project uuid="b0f7c7a5f01c4c7a9fd2fa3668dcd463">tempest-AttachInterfacesUnderV243Test-1051755587</nova:project>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="64e29581-a774-4784-b0cb-b4428b3222f4"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <nova:ports>
Dec 11 14:28:09 compute-0 nova_compute[189440]:        <nova:port uuid="6427f2b4-25ae-460a-8ade-54b5aba9dff6">
Dec 11 14:28:09 compute-0 nova_compute[189440]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:        </nova:port>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      </nova:ports>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <system>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <entry name="serial">1b112e8a-c27d-4b2e-91fc-81552a0cd4ee</entry>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <entry name="uuid">1b112e8a-c27d-4b2e-91fc-81552a0cd4ee</entry>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    </system>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <os>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  </os>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <features>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  </features>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.config"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <interface type="ethernet">
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <mac address="fa:16:3e:d2:1f:b8"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <driver name="vhost" rx_queue_size="512"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <mtu size="1442"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <target dev="tap6427f2b4-25"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    </interface>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/console.log" append="off"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <video>
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    </video>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:28:09 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:28:09 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:28:09 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:28:09 compute-0 nova_compute[189440]: </domain>
Dec 11 14:28:09 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.724 189444 DEBUG nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Preparing to wait for external event network-vif-plugged-6427f2b4-25ae-460a-8ade-54b5aba9dff6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.724 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Acquiring lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.724 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.725 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.725 189444 DEBUG nova.virt.libvirt.vif [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:26:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-29252937',display_name='tempest-AttachInterfacesUnderV243Test-server-29252937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-29252937',id=7,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwy12ql5A4U6y9Lahfkc1RNRunjGbg199xLNIOKY5tApac0IqSPXNZAcb0M7IxjkkFpjYx6eQiqNNwpx7H2rDoKLMLLd6NVALp4qBWbuEUmRnH5bvJMNrq4lHjDtj7dXQ==',key_name='tempest-keypair-1484208004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0f7c7a5f01c4c7a9fd2fa3668dcd463',ramdisk_id='',reservation_id='r-mn60b6gh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1051755587',owner_user_name='tempest-AttachInterfacesUnderV243Test-1051755587-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:27:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a714564f83e74b39aa33b964e9913421',uuid=1b112e8a-c27d-4b2e-91fc-81552a0cd4ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.726 189444 DEBUG nova.network.os_vif_util [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Converting VIF {"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.726 189444 DEBUG nova.network.os_vif_util [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1f:b8,bridge_name='br-int',has_traffic_filtering=True,id=6427f2b4-25ae-460a-8ade-54b5aba9dff6,network=Network(3a7879e9-5e69-43df-aeae-21ce102a3b8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6427f2b4-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.726 189444 DEBUG os_vif [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1f:b8,bridge_name='br-int',has_traffic_filtering=True,id=6427f2b4-25ae-460a-8ade-54b5aba9dff6,network=Network(3a7879e9-5e69-43df-aeae-21ce102a3b8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6427f2b4-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.727 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.727 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.728 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.731 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.731 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6427f2b4-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.731 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6427f2b4-25, col_values=(('external_ids', {'iface-id': '6427f2b4-25ae-460a-8ade-54b5aba9dff6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:1f:b8', 'vm-uuid': '1b112e8a-c27d-4b2e-91fc-81552a0cd4ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.733 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:09 compute-0 NetworkManager[56353]: <info>  [1765463289.7344] manager: (tap6427f2b4-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.736 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.744 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.746 189444 INFO os_vif [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1f:b8,bridge_name='br-int',has_traffic_filtering=True,id=6427f2b4-25ae-460a-8ade-54b5aba9dff6,network=Network(3a7879e9-5e69-43df-aeae-21ce102a3b8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6427f2b4-25')#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.817 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.818 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.818 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] No VIF found with MAC fa:16:3e:d2:1f:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 11 14:28:09 compute-0 nova_compute[189440]: 2025-12-11 14:28:09.819 189444 INFO nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Using config drive#033[00m
Dec 11 14:28:09 compute-0 podman[252000]: 2025-12-11 14:28:09.899331191 +0000 UTC m=+0.098685551 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 11 14:28:09 compute-0 podman[251998]: 2025-12-11 14:28:09.904992516 +0000 UTC m=+0.105902062 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 14:28:09 compute-0 podman[251999]: 2025-12-11 14:28:09.907567441 +0000 UTC m=+0.108122406 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, name=ubi9)
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.366 189444 INFO nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Creating config drive at /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.config#033[00m
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.372 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdnd_bbv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.518 189444 DEBUG oslo_concurrency.processutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdnd_bbv" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:10 compute-0 kernel: tap6427f2b4-25: entered promiscuous mode
Dec 11 14:28:10 compute-0 NetworkManager[56353]: <info>  [1765463290.5906] manager: (tap6427f2b4-25): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Dec 11 14:28:10 compute-0 ovn_controller[97832]: 2025-12-11T14:28:10Z|00074|binding|INFO|Claiming lport 6427f2b4-25ae-460a-8ade-54b5aba9dff6 for this chassis.
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.593 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:10 compute-0 ovn_controller[97832]: 2025-12-11T14:28:10Z|00075|binding|INFO|6427f2b4-25ae-460a-8ade-54b5aba9dff6: Claiming fa:16:3e:d2:1f:b8 10.100.0.4
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.599 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.603 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:1f:b8 10.100.0.4'], port_security=['fa:16:3e:d2:1f:b8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1b112e8a-c27d-4b2e-91fc-81552a0cd4ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a7879e9-5e69-43df-aeae-21ce102a3b8a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0f7c7a5f01c4c7a9fd2fa3668dcd463', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1829ebd-ceed-4dce-ac94-145771215a79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1564259d-64d9-45b0-b2ce-9fa6c2c2bb5a, chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=6427f2b4-25ae-460a-8ade-54b5aba9dff6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.604 106686 INFO neutron.agent.ovn.metadata.agent [-] Port 6427f2b4-25ae-460a-8ade-54b5aba9dff6 in datapath 3a7879e9-5e69-43df-aeae-21ce102a3b8a bound to our chassis#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.606 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a7879e9-5e69-43df-aeae-21ce102a3b8a#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.618 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[14e3b019-d96f-49c8-8375-c17a59c94c64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.619 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3a7879e9-51 in ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.621 239832 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3a7879e9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.621 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea1961e-e355-4bc3-9546-c379a4b9ac3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.622 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[e5069488-6de5-45e2-9959-ef51ad1af430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 systemd-udevd[252069]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.635 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[63a5b174-b32a-4f63-9812-3ecefa00caa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 NetworkManager[56353]: <info>  [1765463290.6534] device (tap6427f2b4-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 14:28:10 compute-0 NetworkManager[56353]: <info>  [1765463290.6604] device (tap6427f2b4-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 11 14:28:10 compute-0 ovn_controller[97832]: 2025-12-11T14:28:10Z|00076|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.661 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:10 compute-0 ovn_controller[97832]: 2025-12-11T14:28:10Z|00077|binding|INFO|Setting lport 6427f2b4-25ae-460a-8ade-54b5aba9dff6 ovn-installed in OVS
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.663 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6e97a9-e4fe-4db7-a26a-3435cb35da66]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_controller[97832]: 2025-12-11T14:28:10Z|00078|binding|INFO|Setting lport 6427f2b4-25ae-460a-8ade-54b5aba9dff6 up in Southbound
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.668 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:10 compute-0 systemd-machined[155778]: New machine qemu-7-instance-00000007.
Dec 11 14:28:10 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.693 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c43f68-6d3c-4eef-bc1f-6939d1d9f279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.705 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[506d540c-88dc-4b5d-887c-924fbf527611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 NetworkManager[56353]: <info>  [1765463290.7079] manager: (tap3a7879e9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Dec 11 14:28:10 compute-0 systemd-udevd[252074]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.735 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[450ea6bb-61c3-4116-ba48-dbc1c644bf1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.739 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[9199a526-cbbd-494b-a070-056a26c87960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 NetworkManager[56353]: <info>  [1765463290.7669] device (tap3a7879e9-50): carrier: link connected
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.772 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[c7060ab8-a0bb-4419-b0e8-d31af9124039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.791 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2cde5c-851a-43d3-a5b5-8a2d1a5a0eb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a7879e9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:d4:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535109, 'reachable_time': 39196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252105, 'error': None, 'target': 'ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.806 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[218bb4d4-12f4-4aab-9055-a27617a9bb48]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:d470'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535109, 'tstamp': 535109}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252106, 'error': None, 'target': 'ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.823 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[df3f1497-7e48-4487-899a-bfeba0200ccb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a7879e9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:d4:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535109, 'reachable_time': 39196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252107, 'error': None, 'target': 'ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.856 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[36bf32f5-a09d-47c5-96bb-8e47beedf5bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.918 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[e34c8c97-cfc1-4aa4-8f18-46d900372ff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.920 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a7879e9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.920 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.921 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a7879e9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:10 compute-0 kernel: tap3a7879e9-50: entered promiscuous mode
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.923 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:10 compute-0 NetworkManager[56353]: <info>  [1765463290.9243] manager: (tap3a7879e9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.929 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a7879e9-50, col_values=(('external_ids', {'iface-id': 'af28a710-cfbd-404b-b1d5-5903ce1a6b8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:10 compute-0 ovn_controller[97832]: 2025-12-11T14:28:10Z|00079|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.930 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.932 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.932 106686 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3a7879e9-5e69-43df-aeae-21ce102a3b8a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3a7879e9-5e69-43df-aeae-21ce102a3b8a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.933 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[b149aa9b-db86-4c20-a9b3-5e72e7188e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.934 106686 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: global
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    log         /dev/log local0 debug
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    log-tag     haproxy-metadata-proxy-3a7879e9-5e69-43df-aeae-21ce102a3b8a
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    user        root
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    group       root
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    maxconn     1024
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    pidfile     /var/lib/neutron/external/pids/3a7879e9-5e69-43df-aeae-21ce102a3b8a.pid.haproxy
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    daemon
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: defaults
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    log global
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    mode http
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    option httplog
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    option dontlognull
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    option http-server-close
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    option forwardfor
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    retries                 3
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    timeout http-request    30s
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    timeout connect         30s
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    timeout client          32s
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    timeout server          32s
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    timeout http-keep-alive 30s
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: listen listener
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    bind 169.254.169.254:80
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    server metadata /var/lib/neutron/metadata_proxy
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]:    http-request add-header X-OVN-Network-ID 3a7879e9-5e69-43df-aeae-21ce102a3b8a
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 11 14:28:10 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:10.934 106686 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a', 'env', 'PROCESS_TAG=haproxy-3a7879e9-5e69-43df-aeae-21ce102a3b8a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3a7879e9-5e69-43df-aeae-21ce102a3b8a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 11 14:28:10 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.944 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:10.999 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463290.9989183, 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.000 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] VM Started (Lifecycle Event)#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.036 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.047 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463291.0036957, 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.047 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] VM Paused (Lifecycle Event)#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.087 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.091 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.094 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.119 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.326 189444 DEBUG nova.network.neutron [req-72d5c11f-1096-4dac-8c4b-20980a3c2efd req-cc0d45ac-9835-45e8-8786-9fccb9c21e06 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updated VIF entry in instance network info cache for port 6427f2b4-25ae-460a-8ade-54b5aba9dff6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.328 189444 DEBUG nova.network.neutron [req-72d5c11f-1096-4dac-8c4b-20980a3c2efd req-cc0d45ac-9835-45e8-8786-9fccb9c21e06 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updating instance_info_cache with network_info: [{"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.350 189444 DEBUG oslo_concurrency.lockutils [req-72d5c11f-1096-4dac-8c4b-20980a3c2efd req-cc0d45ac-9835-45e8-8786-9fccb9c21e06 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:28:11 compute-0 podman[252145]: 2025-12-11 14:28:11.358572151 +0000 UTC m=+0.074358871 container create e3d3441ecf4299a4e1625a5bf9f2d0913c3bf050220506ccd8693cf570a0a80b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:28:11 compute-0 systemd[1]: Started libpod-conmon-e3d3441ecf4299a4e1625a5bf9f2d0913c3bf050220506ccd8693cf570a0a80b.scope.
Dec 11 14:28:11 compute-0 podman[252145]: 2025-12-11 14:28:11.317908244 +0000 UTC m=+0.033694994 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 11 14:28:11 compute-0 systemd[1]: Started libcrun container.
Dec 11 14:28:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e66ca720d25220b5303c3eed522dbd0799f8425dd2db52b4add9075bf55c4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 11 14:28:11 compute-0 podman[252145]: 2025-12-11 14:28:11.473921106 +0000 UTC m=+0.189707916 container init e3d3441ecf4299a4e1625a5bf9f2d0913c3bf050220506ccd8693cf570a0a80b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 11 14:28:11 compute-0 podman[252145]: 2025-12-11 14:28:11.481838408 +0000 UTC m=+0.197625168 container start e3d3441ecf4299a4e1625a5bf9f2d0913c3bf050220506ccd8693cf570a0a80b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:28:11 compute-0 neutron-haproxy-ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a[252160]: [NOTICE]   (252164) : New worker (252166) forked
Dec 11 14:28:11 compute-0 neutron-haproxy-ovnmeta-3a7879e9-5e69-43df-aeae-21ce102a3b8a[252160]: [NOTICE]   (252164) : Loading success.
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.640 189444 DEBUG nova.compute.manager [req-0166af50-774a-48da-8908-cb122b6e9e1a req-35513326-81f0-40f0-a2e6-61174ab5b1fd a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Received event network-vif-plugged-6427f2b4-25ae-460a-8ade-54b5aba9dff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.641 189444 DEBUG oslo_concurrency.lockutils [req-0166af50-774a-48da-8908-cb122b6e9e1a req-35513326-81f0-40f0-a2e6-61174ab5b1fd a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.641 189444 DEBUG oslo_concurrency.lockutils [req-0166af50-774a-48da-8908-cb122b6e9e1a req-35513326-81f0-40f0-a2e6-61174ab5b1fd a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.641 189444 DEBUG oslo_concurrency.lockutils [req-0166af50-774a-48da-8908-cb122b6e9e1a req-35513326-81f0-40f0-a2e6-61174ab5b1fd a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.642 189444 DEBUG nova.compute.manager [req-0166af50-774a-48da-8908-cb122b6e9e1a req-35513326-81f0-40f0-a2e6-61174ab5b1fd a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Processing event network-vif-plugged-6427f2b4-25ae-460a-8ade-54b5aba9dff6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.642 189444 DEBUG nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.649 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463291.6485894, 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.649 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] VM Resumed (Lifecycle Event)#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.652 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.659 189444 INFO nova.virt.libvirt.driver [-] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Instance spawned successfully.#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.659 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.684 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.695 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.700 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.701 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.701 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.702 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.702 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.703 189444 DEBUG nova.virt.libvirt.driver [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.737 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.779 189444 INFO nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Took 24.43 seconds to spawn the instance on the hypervisor.#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.779 189444 DEBUG nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.881 189444 INFO nova.compute.manager [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Took 60.78 seconds to build instance.#033[00m
Dec 11 14:28:11 compute-0 nova_compute[189440]: 2025-12-11 14:28:11.898 189444 DEBUG oslo_concurrency.lockutils [None req-6e26cf76-044a-466a-b414-52c0f978bf51 a714564f83e74b39aa33b964e9913421 b0f7c7a5f01c4c7a9fd2fa3668dcd463 - - default default] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 62.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:13 compute-0 nova_compute[189440]: 2025-12-11 14:28:13.848 189444 DEBUG nova.compute.manager [req-cb3fc653-92dd-4872-a452-7412bb07481d req-29039af5-9545-48ba-8fa5-4b8cad859dec a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Received event network-vif-plugged-6427f2b4-25ae-460a-8ade-54b5aba9dff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:28:13 compute-0 nova_compute[189440]: 2025-12-11 14:28:13.848 189444 DEBUG oslo_concurrency.lockutils [req-cb3fc653-92dd-4872-a452-7412bb07481d req-29039af5-9545-48ba-8fa5-4b8cad859dec a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:13 compute-0 nova_compute[189440]: 2025-12-11 14:28:13.849 189444 DEBUG oslo_concurrency.lockutils [req-cb3fc653-92dd-4872-a452-7412bb07481d req-29039af5-9545-48ba-8fa5-4b8cad859dec a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:13 compute-0 nova_compute[189440]: 2025-12-11 14:28:13.849 189444 DEBUG oslo_concurrency.lockutils [req-cb3fc653-92dd-4872-a452-7412bb07481d req-29039af5-9545-48ba-8fa5-4b8cad859dec a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:13 compute-0 nova_compute[189440]: 2025-12-11 14:28:13.850 189444 DEBUG nova.compute.manager [req-cb3fc653-92dd-4872-a452-7412bb07481d req-29039af5-9545-48ba-8fa5-4b8cad859dec a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] No waiting events found dispatching network-vif-plugged-6427f2b4-25ae-460a-8ade-54b5aba9dff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:28:13 compute-0 nova_compute[189440]: 2025-12-11 14:28:13.850 189444 WARNING nova.compute.manager [req-cb3fc653-92dd-4872-a452-7412bb07481d req-29039af5-9545-48ba-8fa5-4b8cad859dec a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Received unexpected event network-vif-plugged-6427f2b4-25ae-460a-8ade-54b5aba9dff6 for instance with vm_state active and task_state None.#033[00m
Dec 11 14:28:14 compute-0 nova_compute[189440]: 2025-12-11 14:28:14.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:14 compute-0 nova_compute[189440]: 2025-12-11 14:28:14.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:28:14 compute-0 nova_compute[189440]: 2025-12-11 14:28:14.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:28:14 compute-0 nova_compute[189440]: 2025-12-11 14:28:14.701 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:28:14 compute-0 nova_compute[189440]: 2025-12-11 14:28:14.702 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:28:14 compute-0 nova_compute[189440]: 2025-12-11 14:28:14.702 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:28:14 compute-0 nova_compute[189440]: 2025-12-11 14:28:14.702 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64b46b2-b462-4f18-99a0-33cce11b70c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:28:14 compute-0 nova_compute[189440]: 2025-12-11 14:28:14.736 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:15 compute-0 podman[252176]: 2025-12-11 14:28:15.604521328 +0000 UTC m=+0.191167700 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 14:28:16 compute-0 nova_compute[189440]: 2025-12-11 14:28:16.096 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:16 compute-0 nova_compute[189440]: 2025-12-11 14:28:16.913 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updating instance_info_cache with network_info: [{"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:28:16 compute-0 nova_compute[189440]: 2025-12-11 14:28:16.947 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:28:16 compute-0 nova_compute[189440]: 2025-12-11 14:28:16.947 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.280 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.281 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.281 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.281 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.409 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.474 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.475 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.549 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.561 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.659 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.661 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:18 compute-0 nova_compute[189440]: 2025-12-11 14:28:18.723 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.096 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.097 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5032MB free_disk=72.2966537475586GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.099 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.099 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.215 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.216 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.216 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.217 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.232 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing inventories for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.248 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating ProviderTree inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.249 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.277 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing aggregate associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.321 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing trait associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.434 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.454 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:28:19 compute-0 podman[252216]: 2025-12-11 14:28:19.475565624 +0000 UTC m=+0.075220025 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Dec 11 14:28:19 compute-0 podman[252217]: 2025-12-11 14:28:19.477514411 +0000 UTC m=+0.071291181 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.522 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.523 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.523 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.524 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.537 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 11 14:28:19 compute-0 nova_compute[189440]: 2025-12-11 14:28:19.741 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:20 compute-0 nova_compute[189440]: 2025-12-11 14:28:20.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:21 compute-0 nova_compute[189440]: 2025-12-11 14:28:21.098 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:24 compute-0 nova_compute[189440]: 2025-12-11 14:28:24.750 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:26 compute-0 nova_compute[189440]: 2025-12-11 14:28:26.101 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:28 compute-0 nova_compute[189440]: 2025-12-11 14:28:28.251 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:28:28 compute-0 nova_compute[189440]: 2025-12-11 14:28:28.253 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 11 14:28:29 compute-0 podman[203650]: time="2025-12-11T14:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:28:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:28:29 compute-0 nova_compute[189440]: 2025-12-11 14:28:29.754 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5262 "" "Go-http-client/1.1"
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.105 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.320 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.321 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.350 189444 DEBUG nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 11 14:28:31 compute-0 openstack_network_exporter[205834]: ERROR   14:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:28:31 compute-0 openstack_network_exporter[205834]: ERROR   14:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:28:31 compute-0 openstack_network_exporter[205834]: ERROR   14:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:28:31 compute-0 openstack_network_exporter[205834]: ERROR   14:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:28:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:28:31 compute-0 openstack_network_exporter[205834]: ERROR   14:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:28:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.452 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.453 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.469 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.471 189444 INFO nova.compute.claims [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.703 189444 DEBUG nova.compute.provider_tree [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.941 189444 DEBUG nova.scheduler.client.report [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.980 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:31 compute-0 nova_compute[189440]: 2025-12-11 14:28:31.980 189444 DEBUG nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.024 189444 DEBUG nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.025 189444 DEBUG nova.network.neutron [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.050 189444 INFO nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.076 189444 DEBUG nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.201 189444 DEBUG nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.202 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.203 189444 INFO nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Creating image(s)#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.203 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.204 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.204 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.217 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.308 189444 DEBUG nova.policy [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5fde21296346489db3133bd3ccf4e92f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3e4b83c3ff8a49fb829dba1ec8a2121e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.313 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.314 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "b9398531008bd76fff67b1480b858b505311524e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.315 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.328 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.424 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.425 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e,backing_fmt=raw /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.501 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e,backing_fmt=raw /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk 1073741824" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.502 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.503 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.579 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.580 189444 DEBUG nova.virt.disk.api [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Checking if we can resize image /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.581 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.645 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.647 189444 DEBUG nova.virt.disk.api [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Cannot resize image /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.647 189444 DEBUG nova.objects.instance [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lazy-loading 'migration_context' on Instance uuid c76d24aa-f7f9-49a6-b248-ab2d703c2930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.673 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.673 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Ensure instance console log exists: /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.674 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.675 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:32 compute-0 nova_compute[189440]: 2025-12-11 14:28:32.676 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:33 compute-0 podman[252275]: 2025-12-11 14:28:33.513434384 +0000 UTC m=+0.100552725 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:28:33 compute-0 podman[252274]: 2025-12-11 14:28:33.550941129 +0000 UTC m=+0.133319661 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 11 14:28:33 compute-0 nova_compute[189440]: 2025-12-11 14:28:33.799 189444 DEBUG nova.network.neutron [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Successfully created port: 52f6df19-5cbb-49e5-8051-125a414c0f9f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 11 14:28:34 compute-0 nova_compute[189440]: 2025-12-11 14:28:34.759 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:34 compute-0 nova_compute[189440]: 2025-12-11 14:28:34.920 189444 DEBUG nova.network.neutron [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Successfully updated port: 52f6df19-5cbb-49e5-8051-125a414c0f9f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 11 14:28:34 compute-0 nova_compute[189440]: 2025-12-11 14:28:34.936 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:28:34 compute-0 nova_compute[189440]: 2025-12-11 14:28:34.937 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquired lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:28:34 compute-0 nova_compute[189440]: 2025-12-11 14:28:34.937 189444 DEBUG nova.network.neutron [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:28:35 compute-0 nova_compute[189440]: 2025-12-11 14:28:35.049 189444 DEBUG nova.compute.manager [req-e850b94c-037e-4d98-b71c-69721f7ca950 req-6935adf1-da47-46e6-a513-22eff5948db6 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-changed-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:28:35 compute-0 nova_compute[189440]: 2025-12-11 14:28:35.050 189444 DEBUG nova.compute.manager [req-e850b94c-037e-4d98-b71c-69721f7ca950 req-6935adf1-da47-46e6-a513-22eff5948db6 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Refreshing instance network info cache due to event network-changed-52f6df19-5cbb-49e5-8051-125a414c0f9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:28:35 compute-0 nova_compute[189440]: 2025-12-11 14:28:35.051 189444 DEBUG oslo_concurrency.lockutils [req-e850b94c-037e-4d98-b71c-69721f7ca950 req-6935adf1-da47-46e6-a513-22eff5948db6 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:28:35 compute-0 nova_compute[189440]: 2025-12-11 14:28:35.184 189444 DEBUG nova.network.neutron [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.106 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.112 189444 DEBUG nova.network.neutron [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Updating instance_info_cache with network_info: [{"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.137 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Releasing lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.137 189444 DEBUG nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Instance network_info: |[{"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.137 189444 DEBUG oslo_concurrency.lockutils [req-e850b94c-037e-4d98-b71c-69721f7ca950 req-6935adf1-da47-46e6-a513-22eff5948db6 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.137 189444 DEBUG nova.network.neutron [req-e850b94c-037e-4d98-b71c-69721f7ca950 req-6935adf1-da47-46e6-a513-22eff5948db6 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Refreshing network info cache for port 52f6df19-5cbb-49e5-8051-125a414c0f9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.139 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Start _get_guest_xml network_info=[{"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-11T14:25:25Z,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-11T14:25:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '64e29581-a774-4784-b0cb-b4428b3222f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.147 189444 WARNING nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.153 189444 DEBUG nova.virt.libvirt.host [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.153 189444 DEBUG nova.virt.libvirt.host [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.158 189444 DEBUG nova.virt.libvirt.host [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.158 189444 DEBUG nova.virt.libvirt.host [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.158 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.159 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:25:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='639c6f85-2c0f-4003-98b6-94c63eeb9fc7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-11T14:25:25Z,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-11T14:25:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.159 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.159 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.159 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.159 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.160 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.160 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.160 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.160 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.160 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.160 189444 DEBUG nova.virt.hardware [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.172 189444 DEBUG nova.virt.libvirt.vif [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-841961376',display_name='tempest-ServerActionsTestJSON-server-841961376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-841961376',id=8,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4uuTromKvYazAi/ZcTswvYdpFQO/eOeQ0R7nGbb/Zq0OYhVFvcR4MV0lRBAAEY0tvtOkCbrPDklymzrDzA6JNjcl5/XMDAWsZbYP/ZSp/w8oqE1UIbRS8HSekXLExQxw==',key_name='tempest-keypair-991552200',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3e4b83c3ff8a49fb829dba1ec8a2121e',ramdisk_id='',reservation_id='r-d24sbuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-954728080',owner_user_name='tempest-ServerActionsTestJSON-954728080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:28:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5fde21296346489db3133bd3ccf4e92f',uuid=c76d24aa-f7f9-49a6-b248-ab2d703c2930,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.173 189444 DEBUG nova.network.os_vif_util [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converting VIF {"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.173 189444 DEBUG nova.network.os_vif_util [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.174 189444 DEBUG nova.objects.instance [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lazy-loading 'pci_devices' on Instance uuid c76d24aa-f7f9-49a6-b248-ab2d703c2930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.195 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <uuid>c76d24aa-f7f9-49a6-b248-ab2d703c2930</uuid>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <name>instance-00000008</name>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <memory>131072</memory>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <nova:name>tempest-ServerActionsTestJSON-server-841961376</nova:name>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:28:36</nova:creationTime>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <nova:flavor name="m1.nano">
Dec 11 14:28:36 compute-0 nova_compute[189440]:        <nova:memory>128</nova:memory>
Dec 11 14:28:36 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:28:36 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:28:36 compute-0 nova_compute[189440]:        <nova:ephemeral>0</nova:ephemeral>
Dec 11 14:28:36 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:28:36 compute-0 nova_compute[189440]:        <nova:user uuid="5fde21296346489db3133bd3ccf4e92f">tempest-ServerActionsTestJSON-954728080-project-member</nova:user>
Dec 11 14:28:36 compute-0 nova_compute[189440]:        <nova:project uuid="3e4b83c3ff8a49fb829dba1ec8a2121e">tempest-ServerActionsTestJSON-954728080</nova:project>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="64e29581-a774-4784-b0cb-b4428b3222f4"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <nova:ports>
Dec 11 14:28:36 compute-0 nova_compute[189440]:        <nova:port uuid="52f6df19-5cbb-49e5-8051-125a414c0f9f">
Dec 11 14:28:36 compute-0 nova_compute[189440]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:        </nova:port>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      </nova:ports>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <system>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <entry name="serial">c76d24aa-f7f9-49a6-b248-ab2d703c2930</entry>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <entry name="uuid">c76d24aa-f7f9-49a6-b248-ab2d703c2930</entry>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    </system>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <os>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  </os>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <features>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  </features>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.config"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <interface type="ethernet">
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <mac address="fa:16:3e:26:c9:b5"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <driver name="vhost" rx_queue_size="512"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <mtu size="1442"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <target dev="tap52f6df19-5c"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    </interface>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/console.log" append="off"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <video>
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    </video>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:28:36 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:28:36 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:28:36 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:28:36 compute-0 nova_compute[189440]: </domain>
Dec 11 14:28:36 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
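The `<devices>` section Nova logged above can be inspected programmatically. A minimal sketch using only the Python standard library's `ElementTree`; the embedded XML is a reduced reconstruction of a few elements from the log (the real domain XML carries many more devices), and `list_devices` is a hypothetical helper, not Nova code.

```python
import xml.etree.ElementTree as ET

# Reduced fragment mirroring part of the <devices> section logged above
# (reconstructed for illustration; the real domain XML is much larger).
DEVICES_XML = """
<devices>
  <disk type="file" device="cdrom">
    <driver name="qemu" type="raw" cache="none"/>
    <target dev="sda" bus="sata"/>
  </disk>
  <interface type="ethernet">
    <mac address="fa:16:3e:26:c9:b5"/>
    <model type="virtio"/>
    <target dev="tap52f6df19-5c"/>
  </interface>
  <controller type="pci" model="pcie-root"/>
</devices>
"""

def list_devices(xml_text):
    """Return (tag, attributes) pairs for each direct child of <devices>."""
    root = ET.fromstring(xml_text)
    return [(child.tag, dict(child.attrib)) for child in root]

devs = list_devices(DEVICES_XML)
```

The same approach works on the output of `virsh dumpxml <domain>` once the instance is running.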
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.196 189444 DEBUG nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Preparing to wait for external event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.196 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.197 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.197 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.198 189444 DEBUG nova.virt.libvirt.vif [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-11T14:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-841961376',display_name='tempest-ServerActionsTestJSON-server-841961376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-841961376',id=8,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4uuTromKvYazAi/ZcTswvYdpFQO/eOeQ0R7nGbb/Zq0OYhVFvcR4MV0lRBAAEY0tvtOkCbrPDklymzrDzA6JNjcl5/XMDAWsZbYP/ZSp/w8oqE1UIbRS8HSekXLExQxw==',key_name='tempest-keypair-991552200',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3e4b83c3ff8a49fb829dba1ec8a2121e',ramdisk_id='',reservation_id='r-d24sbuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-954728080',owner_user_name='tempest-ServerActionsTestJSON-954728080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:28:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5fde21296346489db3133bd3ccf4e92f',uuid=c76d24aa-f7f9-49a6-b248-ab2d703c2930,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.198 189444 DEBUG nova.network.os_vif_util [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converting VIF {"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.199 189444 DEBUG nova.network.os_vif_util [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.199 189444 DEBUG os_vif [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.201 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.203 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.205 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.216 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.218 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52f6df19-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.219 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52f6df19-5c, col_values=(('external_ids', {'iface-id': '52f6df19-5cbb-49e5-8051-125a414c0f9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:c9:b5', 'vm-uuid': 'c76d24aa-f7f9-49a6-b248-ab2d703c2930'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.221 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:36 compute-0 NetworkManager[56353]: <info>  [1765463316.2244] manager: (tap52f6df19-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.225 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.239 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.241 189444 INFO os_vif [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c')#033[00m
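The `AddPortCommand`/`DbSetCommand` transaction above (os-vif plugging the tap into `br-int` and stamping its `external_ids`) is roughly equivalent to a single `ovs-vsctl` invocation. This sketch only assembles the argv so the mapping is visible; it does not talk to OVS, `ovs_plug_argv` is a hypothetical helper, and the values are taken from the log lines.

```python
def ovs_plug_argv(bridge, port, iface_id, mac, vm_uuid):
    """Build an ovs-vsctl command equivalent to the logged ovsdbapp transaction."""
    # external_ids keys as set by DbSetCommand in the log above.
    external_ids = {
        "iface-id": iface_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }
    argv = [
        "ovs-vsctl",
        "--may-exist", "add-port", bridge, port,   # AddPortCommand(may_exist=True)
        "--", "set", "Interface", port,            # DbSetCommand on table Interface
    ]
    argv += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
    return argv

cmd = ovs_plug_argv(
    "br-int",
    "tap52f6df19-5c",
    "52f6df19-5cbb-49e5-8051-125a414c0f9f",
    "fa:16:3e:26:c9:b5",
    "c76d24aa-f7f9-49a6-b248-ab2d703c2930",
)
```

The `iface-id` is what ovn-controller matches against the logical port name when it claims the port a few lines further down.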
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.314 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.315 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.316 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] No VIF found with MAC fa:16:3e:26:c9:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.317 189444 INFO nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Using config drive#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.648 189444 INFO nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Creating config drive at /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.config#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.661 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqtidxn7b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.813 189444 DEBUG oslo_concurrency.processutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqtidxn7b" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
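The mkisofs invocation above builds the config drive. Note that the publisher string contains spaces, so it is a single argv element even though the flattened log line makes it look like several. A sketch of how that argv fits together, using the flags and paths from the log; `mkisofs_argv` is a hypothetical helper, not the Nova implementation.

```python
def mkisofs_argv(output_path, publisher, source_dir):
    """Assemble the config-drive mkisofs command as logged above."""
    return [
        "/usr/bin/mkisofs",
        "-o", output_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,   # one argument despite embedded spaces
        "-quiet", "-J", "-r",
        "-V", "config-2",          # volume label that cloud-init searches for
        source_dir,                # staged metadata tree to pack into the ISO
    ]

argv = mkisofs_argv(
    "/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.config",
    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "/tmp/tmpqtidxn7b",
)
```

The resulting `disk.config` is the file attached as the SATA cdrom (`<target dev="sda" bus="sata"/>`) in the guest XML earlier in the log.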
Dec 11 14:28:36 compute-0 kernel: tap52f6df19-5c: entered promiscuous mode
Dec 11 14:28:36 compute-0 NetworkManager[56353]: <info>  [1765463316.8712] manager: (tap52f6df19-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Dec 11 14:28:36 compute-0 ovn_controller[97832]: 2025-12-11T14:28:36Z|00080|binding|INFO|Claiming lport 52f6df19-5cbb-49e5-8051-125a414c0f9f for this chassis.
Dec 11 14:28:36 compute-0 ovn_controller[97832]: 2025-12-11T14:28:36Z|00081|binding|INFO|52f6df19-5cbb-49e5-8051-125a414c0f9f: Claiming fa:16:3e:26:c9:b5 10.100.0.8
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.873 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.882 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.917 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:c9:b5 10.100.0.8'], port_security=['fa:16:3e:26:c9:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c76d24aa-f7f9-49a6-b248-ab2d703c2930', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81fb21e1-e42a-429c-bdb6-a671b908997f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e4b83c3ff8a49fb829dba1ec8a2121e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0fd90c69-6fef-4c09-94ec-ce2f215b43eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f65c3ca-604c-4a31-a0d6-f4b05c29492f, chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=52f6df19-5cbb-49e5-8051-125a414c0f9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.918 106686 INFO neutron.agent.ovn.metadata.agent [-] Port 52f6df19-5cbb-49e5-8051-125a414c0f9f in datapath 81fb21e1-e42a-429c-bdb6-a671b908997f bound to our chassis#033[00m
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.921 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81fb21e1-e42a-429c-bdb6-a671b908997f#033[00m
Dec 11 14:28:36 compute-0 systemd-udevd[252332]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:28:36 compute-0 NetworkManager[56353]: <info>  [1765463316.9470] device (tap52f6df19-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.935 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[dabe5ac8-80ce-4918-bc97-d569bf487d79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.941 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81fb21e1-e1 in ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.943 239832 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81fb21e1-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.943 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f37ae4-cf09-46ba-b246-c45e0f4229a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.944 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ece644-f937-48a4-bb5c-1313ea57f05b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:36 compute-0 NetworkManager[56353]: <info>  [1765463316.9514] device (tap52f6df19-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.951 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:36 compute-0 ovn_controller[97832]: 2025-12-11T14:28:36Z|00082|binding|INFO|Setting lport 52f6df19-5cbb-49e5-8051-125a414c0f9f ovn-installed in OVS
Dec 11 14:28:36 compute-0 ovn_controller[97832]: 2025-12-11T14:28:36Z|00083|binding|INFO|Setting lport 52f6df19-5cbb-49e5-8051-125a414c0f9f up in Southbound
Dec 11 14:28:36 compute-0 nova_compute[189440]: 2025-12-11 14:28:36.962 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:36 compute-0 systemd-machined[155778]: New machine qemu-8-instance-00000008.
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.964 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[dd01998a-4721-487b-b06d-a5984e243728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:36 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec 11 14:28:36 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:36.991 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[27b883cb-daaf-4953-9930-e52c81a5fcf3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.022 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[908e82e9-7405-47b3-9a42-895dee15b9f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.029 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f1da7b-a03c-427a-ab66-7eb6c447a67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 NetworkManager[56353]: <info>  [1765463317.0306] manager: (tap81fb21e1-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.069 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[f558814b-cbe0-440c-9701-6d52ab443cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.073 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[37bf699a-b954-4ea8-ac2a-4a41093d5f40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 NetworkManager[56353]: <info>  [1765463317.1096] device (tap81fb21e1-e0): carrier: link connected
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.119 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[5144ad5e-829b-4f83-8e94-710c20ce54a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.136 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[589145b2-1165-477f-8435-8b812659e9bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81fb21e1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:97:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537743, 'reachable_time': 16234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252368, 'error': None, 'target': 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.158 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[36cfedd3-5a04-4d12-bcf9-6b3962e9f706]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:97e7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537743, 'tstamp': 537743}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252370, 'error': None, 'target': 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.172 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[1729cbfc-ce1b-4100-b2db-55191ae98cf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81fb21e1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:97:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537743, 'reachable_time': 16234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252371, 'error': None, 'target': 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.205 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[da5fd989-222c-45b1-97d6-3e8123240509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.270 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7bca23-8749-44a7-9b51-e462473d5d63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.272 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81fb21e1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.272 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.272 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81fb21e1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.275 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:37 compute-0 NetworkManager[56353]: <info>  [1765463317.2759] manager: (tap81fb21e1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec 11 14:28:37 compute-0 kernel: tap81fb21e1-e0: entered promiscuous mode
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.282 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.285 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81fb21e1-e0, col_values=(('external_ids', {'iface-id': '0c7654b9-d19e-4dbf-aa95-fd31082835ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.287 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:37 compute-0 ovn_controller[97832]: 2025-12-11T14:28:37Z|00084|binding|INFO|Releasing lport 0c7654b9-d19e-4dbf-aa95-fd31082835ab from this chassis (sb_readonly=0)
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.312 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.315 106686 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81fb21e1-e42a-429c-bdb6-a671b908997f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81fb21e1-e42a-429c-bdb6-a671b908997f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.315 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.318 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5fd7b9-42af-47c5-be1f-4b2bc85dad1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.319 106686 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: global
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    log         /dev/log local0 debug
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    log-tag     haproxy-metadata-proxy-81fb21e1-e42a-429c-bdb6-a671b908997f
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    user        root
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    group       root
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    maxconn     1024
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    pidfile     /var/lib/neutron/external/pids/81fb21e1-e42a-429c-bdb6-a671b908997f.pid.haproxy
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    daemon
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: defaults
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    log global
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    mode http
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    option httplog
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    option dontlognull
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    option http-server-close
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    option forwardfor
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    retries                 3
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    timeout http-request    30s
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    timeout connect         30s
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    timeout client          32s
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    timeout server          32s
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    timeout http-keep-alive 30s
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: listen listener
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    bind 169.254.169.254:80
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    server metadata /var/lib/neutron/metadata_proxy
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]:    http-request add-header X-OVN-Network-ID 81fb21e1-e42a-429c-bdb6-a671b908997f
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 11 14:28:37 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:37.323 106686 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'env', 'PROCESS_TAG=haproxy-81fb21e1-e42a-429c-bdb6-a671b908997f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81fb21e1-e42a-429c-bdb6-a671b908997f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.665 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463317.6600928, c76d24aa-f7f9-49a6-b248-ab2d703c2930 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.666 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] VM Started (Lifecycle Event)#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.696 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.702 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463317.6605546, c76d24aa-f7f9-49a6-b248-ab2d703c2930 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.702 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] VM Paused (Lifecycle Event)#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.744 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.751 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.756 189444 DEBUG nova.compute.manager [req-d58d78ea-ad00-4cc6-ad00-6051eb34ba43 req-cb35bca7-3d50-412a-95a3-249d915a3b7b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.757 189444 DEBUG oslo_concurrency.lockutils [req-d58d78ea-ad00-4cc6-ad00-6051eb34ba43 req-cb35bca7-3d50-412a-95a3-249d915a3b7b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.757 189444 DEBUG oslo_concurrency.lockutils [req-d58d78ea-ad00-4cc6-ad00-6051eb34ba43 req-cb35bca7-3d50-412a-95a3-249d915a3b7b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.758 189444 DEBUG oslo_concurrency.lockutils [req-d58d78ea-ad00-4cc6-ad00-6051eb34ba43 req-cb35bca7-3d50-412a-95a3-249d915a3b7b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.758 189444 DEBUG nova.compute.manager [req-d58d78ea-ad00-4cc6-ad00-6051eb34ba43 req-cb35bca7-3d50-412a-95a3-249d915a3b7b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Processing event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.762 189444 DEBUG nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.772 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.775 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.776 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463317.7713742, c76d24aa-f7f9-49a6-b248-ab2d703c2930 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.777 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] VM Resumed (Lifecycle Event)#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.782 189444 INFO nova.virt.libvirt.driver [-] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Instance spawned successfully.#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.783 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 11 14:28:37 compute-0 podman[252409]: 2025-12-11 14:28:37.786017788 +0000 UTC m=+0.079469250 container create 822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.806 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.818 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.819 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.819 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.820 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.820 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.821 189444 DEBUG nova.virt.libvirt.driver [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 11 14:28:37 compute-0 systemd[1]: Started libpod-conmon-822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14.scope.
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.827 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:28:37 compute-0 podman[252409]: 2025-12-11 14:28:37.741441787 +0000 UTC m=+0.034893259 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 11 14:28:37 compute-0 systemd[1]: Started libcrun container.
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.862 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 11 14:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5776530dd1f93ea8275fb2e29c92965869b28e15da146400118acc05b1ef236c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 11 14:28:37 compute-0 podman[252409]: 2025-12-11 14:28:37.882138362 +0000 UTC m=+0.175589844 container init 822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 14:28:37 compute-0 podman[252409]: 2025-12-11 14:28:37.892189026 +0000 UTC m=+0.185640488 container start 822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.901 189444 INFO nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Took 5.70 seconds to spawn the instance on the hypervisor.#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.902 189444 DEBUG nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:28:37 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[252422]: [NOTICE]   (252426) : New worker (252428) forked
Dec 11 14:28:37 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[252422]: [NOTICE]   (252426) : Loading success.
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.955 189444 DEBUG nova.network.neutron [req-e850b94c-037e-4d98-b71c-69721f7ca950 req-6935adf1-da47-46e6-a513-22eff5948db6 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Updated VIF entry in instance network info cache for port 52f6df19-5cbb-49e5-8051-125a414c0f9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.956 189444 DEBUG nova.network.neutron [req-e850b94c-037e-4d98-b71c-69721f7ca950 req-6935adf1-da47-46e6-a513-22eff5948db6 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Updating instance_info_cache with network_info: [{"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.975 189444 DEBUG oslo_concurrency.lockutils [req-e850b94c-037e-4d98-b71c-69721f7ca950 req-6935adf1-da47-46e6-a513-22eff5948db6 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:28:37 compute-0 nova_compute[189440]: 2025-12-11 14:28:37.993 189444 INFO nova.compute.manager [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Took 6.58 seconds to build instance.#033[00m
Dec 11 14:28:38 compute-0 nova_compute[189440]: 2025-12-11 14:28:38.029 189444 DEBUG oslo_concurrency.lockutils [None req-c300ca5f-012f-4968-9ba2-696c2d469728 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:39 compute-0 podman[252437]: 2025-12-11 14:28:39.509608942 +0000 UTC m=+0.101911075 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm)
Dec 11 14:28:40 compute-0 podman[252456]: 2025-12-11 14:28:40.498677453 +0000 UTC m=+0.081121708 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, vendor=Red Hat, Inc.)
Dec 11 14:28:40 compute-0 podman[252455]: 2025-12-11 14:28:40.51911931 +0000 UTC m=+0.104626224 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:28:40 compute-0 podman[252457]: 2025-12-11 14:28:40.543217093 +0000 UTC m=+0.116927813 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 14:28:40 compute-0 nova_compute[189440]: 2025-12-11 14:28:40.731 189444 DEBUG nova.compute.manager [req-3c2cfe37-0892-45ee-b941-b4dae25085c5 req-f26bbe65-3e5b-409c-bd16-b5c1071af2b1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:28:40 compute-0 nova_compute[189440]: 2025-12-11 14:28:40.731 189444 DEBUG oslo_concurrency.lockutils [req-3c2cfe37-0892-45ee-b941-b4dae25085c5 req-f26bbe65-3e5b-409c-bd16-b5c1071af2b1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:28:40 compute-0 nova_compute[189440]: 2025-12-11 14:28:40.732 189444 DEBUG oslo_concurrency.lockutils [req-3c2cfe37-0892-45ee-b941-b4dae25085c5 req-f26bbe65-3e5b-409c-bd16-b5c1071af2b1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:28:40 compute-0 nova_compute[189440]: 2025-12-11 14:28:40.732 189444 DEBUG oslo_concurrency.lockutils [req-3c2cfe37-0892-45ee-b941-b4dae25085c5 req-f26bbe65-3e5b-409c-bd16-b5c1071af2b1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:28:40 compute-0 nova_compute[189440]: 2025-12-11 14:28:40.732 189444 DEBUG nova.compute.manager [req-3c2cfe37-0892-45ee-b941-b4dae25085c5 req-f26bbe65-3e5b-409c-bd16-b5c1071af2b1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] No waiting events found dispatching network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:28:40 compute-0 nova_compute[189440]: 2025-12-11 14:28:40.733 189444 WARNING nova.compute.manager [req-3c2cfe37-0892-45ee-b941-b4dae25085c5 req-f26bbe65-3e5b-409c-bd16-b5c1071af2b1 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received unexpected event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f for instance with vm_state active and task_state None.#033[00m
Dec 11 14:28:41 compute-0 nova_compute[189440]: 2025-12-11 14:28:41.110 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:41 compute-0 nova_compute[189440]: 2025-12-11 14:28:41.222 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:44 compute-0 NetworkManager[56353]: <info>  [1765463324.1731] manager: (patch-br-int-to-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec 11 14:28:44 compute-0 NetworkManager[56353]: <info>  [1765463324.1743] manager: (patch-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec 11 14:28:44 compute-0 nova_compute[189440]: 2025-12-11 14:28:44.175 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:44 compute-0 nova_compute[189440]: 2025-12-11 14:28:44.314 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:44 compute-0 ovn_controller[97832]: 2025-12-11T14:28:44Z|00085|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:28:44 compute-0 ovn_controller[97832]: 2025-12-11T14:28:44Z|00086|binding|INFO|Releasing lport 0c7654b9-d19e-4dbf-aa95-fd31082835ab from this chassis (sb_readonly=0)
Dec 11 14:28:44 compute-0 ovn_controller[97832]: 2025-12-11T14:28:44Z|00087|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:28:44 compute-0 nova_compute[189440]: 2025-12-11 14:28:44.344 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:44 compute-0 nova_compute[189440]: 2025-12-11 14:28:44.718 189444 DEBUG nova.compute.manager [req-db8e2017-2a1b-47f2-bde7-4a20e6215ee1 req-052d1cbe-22bb-4e3e-9944-5c5fbe07d00d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-changed-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:28:44 compute-0 nova_compute[189440]: 2025-12-11 14:28:44.719 189444 DEBUG nova.compute.manager [req-db8e2017-2a1b-47f2-bde7-4a20e6215ee1 req-052d1cbe-22bb-4e3e-9944-5c5fbe07d00d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Refreshing instance network info cache due to event network-changed-52f6df19-5cbb-49e5-8051-125a414c0f9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:28:44 compute-0 nova_compute[189440]: 2025-12-11 14:28:44.720 189444 DEBUG oslo_concurrency.lockutils [req-db8e2017-2a1b-47f2-bde7-4a20e6215ee1 req-052d1cbe-22bb-4e3e-9944-5c5fbe07d00d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:28:44 compute-0 nova_compute[189440]: 2025-12-11 14:28:44.720 189444 DEBUG oslo_concurrency.lockutils [req-db8e2017-2a1b-47f2-bde7-4a20e6215ee1 req-052d1cbe-22bb-4e3e-9944-5c5fbe07d00d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:28:44 compute-0 nova_compute[189440]: 2025-12-11 14:28:44.721 189444 DEBUG nova.network.neutron [req-db8e2017-2a1b-47f2-bde7-4a20e6215ee1 req-052d1cbe-22bb-4e3e-9944-5c5fbe07d00d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Refreshing network info cache for port 52f6df19-5cbb-49e5-8051-125a414c0f9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:28:46 compute-0 nova_compute[189440]: 2025-12-11 14:28:46.113 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:46 compute-0 nova_compute[189440]: 2025-12-11 14:28:46.225 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:46 compute-0 podman[252517]: 2025-12-11 14:28:46.525940156 +0000 UTC m=+0.127769247 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 11 14:28:47 compute-0 ovn_controller[97832]: 2025-12-11T14:28:47Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:1f:b8 10.100.0.4
Dec 11 14:28:47 compute-0 ovn_controller[97832]: 2025-12-11T14:28:47Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:1f:b8 10.100.0.4
Dec 11 14:28:47 compute-0 nova_compute[189440]: 2025-12-11 14:28:47.341 189444 DEBUG nova.network.neutron [req-db8e2017-2a1b-47f2-bde7-4a20e6215ee1 req-052d1cbe-22bb-4e3e-9944-5c5fbe07d00d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Updated VIF entry in instance network info cache for port 52f6df19-5cbb-49e5-8051-125a414c0f9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:28:47 compute-0 nova_compute[189440]: 2025-12-11 14:28:47.342 189444 DEBUG nova.network.neutron [req-db8e2017-2a1b-47f2-bde7-4a20e6215ee1 req-052d1cbe-22bb-4e3e-9944-5c5fbe07d00d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Updating instance_info_cache with network_info: [{"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:28:47 compute-0 nova_compute[189440]: 2025-12-11 14:28:47.362 189444 DEBUG oslo_concurrency.lockutils [req-db8e2017-2a1b-47f2-bde7-4a20e6215ee1 req-052d1cbe-22bb-4e3e-9944-5c5fbe07d00d a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:28:50 compute-0 podman[252549]: 2025-12-11 14:28:50.504854669 +0000 UTC m=+0.089800805 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, release=1755695350)
Dec 11 14:28:50 compute-0 podman[252550]: 2025-12-11 14:28:50.512145708 +0000 UTC m=+0.090582634 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:28:51 compute-0 nova_compute[189440]: 2025-12-11 14:28:51.117 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:51 compute-0 nova_compute[189440]: 2025-12-11 14:28:51.228 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:54.240 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:28:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:54.242 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:28:54 compute-0 nova_compute[189440]: 2025-12-11 14:28:54.250 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:56 compute-0 nova_compute[189440]: 2025-12-11 14:28:56.121 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:56 compute-0 nova_compute[189440]: 2025-12-11 14:28:56.231 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:28:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:28:56.246 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:28:59 compute-0 podman[203650]: time="2025-12-11T14:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:28:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec 11 14:28:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5730 "" "Go-http-client/1.1"
Dec 11 14:29:01 compute-0 nova_compute[189440]: 2025-12-11 14:29:01.124 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:01 compute-0 nova_compute[189440]: 2025-12-11 14:29:01.234 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:01 compute-0 openstack_network_exporter[205834]: ERROR   14:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:29:01 compute-0 openstack_network_exporter[205834]: ERROR   14:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:29:01 compute-0 openstack_network_exporter[205834]: ERROR   14:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:29:01 compute-0 openstack_network_exporter[205834]: ERROR   14:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:29:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:29:01 compute-0 openstack_network_exporter[205834]: ERROR   14:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:29:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:29:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:04.111 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:29:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:04.112 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:29:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:04.113 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:29:04 compute-0 podman[252591]: 2025-12-11 14:29:04.49187245 +0000 UTC m=+0.083934921 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 11 14:29:04 compute-0 podman[252592]: 2025-12-11 14:29:04.492792872 +0000 UTC m=+0.080796345 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:29:05 compute-0 nova_compute[189440]: 2025-12-11 14:29:05.257 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:06 compute-0 nova_compute[189440]: 2025-12-11 14:29:06.127 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:06 compute-0 nova_compute[189440]: 2025-12-11 14:29:06.236 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:09 compute-0 nova_compute[189440]: 2025-12-11 14:29:09.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:09 compute-0 nova_compute[189440]: 2025-12-11 14:29:09.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:29:10 compute-0 nova_compute[189440]: 2025-12-11 14:29:10.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:10 compute-0 podman[252634]: 2025-12-11 14:29:10.512523532 +0000 UTC m=+0.103134123 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:29:10 compute-0 podman[252656]: 2025-12-11 14:29:10.612923655 +0000 UTC m=+0.063822077 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:29:10 compute-0 podman[252655]: 2025-12-11 14:29:10.624356906 +0000 UTC m=+0.075687189 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, version=9.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 14:29:10 compute-0 podman[252689]: 2025-12-11 14:29:10.704679677 +0000 UTC m=+0.066328149 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 11 14:29:11 compute-0 nova_compute[189440]: 2025-12-11 14:29:11.130 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:11 compute-0 nova_compute[189440]: 2025-12-11 14:29:11.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:11 compute-0 nova_compute[189440]: 2025-12-11 14:29:11.239 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:12 compute-0 ovn_controller[97832]: 2025-12-11T14:29:12Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:c9:b5 10.100.0.8
Dec 11 14:29:12 compute-0 ovn_controller[97832]: 2025-12-11T14:29:12Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:c9:b5 10.100.0.8
Dec 11 14:29:14 compute-0 nova_compute[189440]: 2025-12-11 14:29:14.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:14 compute-0 nova_compute[189440]: 2025-12-11 14:29:14.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:29:14 compute-0 nova_compute[189440]: 2025-12-11 14:29:14.702 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:29:14 compute-0 nova_compute[189440]: 2025-12-11 14:29:14.702 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:29:14 compute-0 nova_compute[189440]: 2025-12-11 14:29:14.703 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:29:16 compute-0 nova_compute[189440]: 2025-12-11 14:29:16.135 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:16 compute-0 nova_compute[189440]: 2025-12-11 14:29:16.241 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:16 compute-0 nova_compute[189440]: 2025-12-11 14:29:16.619 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updating instance_info_cache with network_info: [{"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:29:16 compute-0 nova_compute[189440]: 2025-12-11 14:29:16.644 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:29:16 compute-0 nova_compute[189440]: 2025-12-11 14:29:16.645 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:29:17 compute-0 podman[252729]: 2025-12-11 14:29:17.543133379 +0000 UTC m=+0.139844253 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.264 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.265 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.265 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.266 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.383 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.481 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.482 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.565 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.574 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.634 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.635 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.693 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.703 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.765 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.766 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.790 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:19 compute-0 nova_compute[189440]: 2025-12-11 14:29:19.832 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.288 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.290 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4864MB free_disk=72.23962020874023GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.291 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.292 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.443 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.444 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.444 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance c76d24aa-f7f9-49a6-b248-ab2d703c2930 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.445 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.445 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.680 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.696 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.716 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:29:20 compute-0 nova_compute[189440]: 2025-12-11 14:29:20.717 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:29:21 compute-0 nova_compute[189440]: 2025-12-11 14:29:21.137 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:21 compute-0 nova_compute[189440]: 2025-12-11 14:29:21.244 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:21 compute-0 podman[252775]: 2025-12-11 14:29:21.467190976 +0000 UTC m=+0.062024264 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 14:29:21 compute-0 podman[252774]: 2025-12-11 14:29:21.497464718 +0000 UTC m=+0.095809372 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 11 14:29:21 compute-0 nova_compute[189440]: 2025-12-11 14:29:21.713 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:22 compute-0 nova_compute[189440]: 2025-12-11 14:29:22.030 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:29:26 compute-0 nova_compute[189440]: 2025-12-11 14:29:26.141 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:26 compute-0 nova_compute[189440]: 2025-12-11 14:29:26.246 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:26 compute-0 ovn_controller[97832]: 2025-12-11T14:29:26Z|00088|memory|INFO|peak resident set size grew 51% in last 2691.4 seconds, from 16000 kB to 24096 kB
Dec 11 14:29:26 compute-0 ovn_controller[97832]: 2025-12-11T14:29:26Z|00089|memory|INFO|idl-cells-OVN_Southbound:10070 idl-cells-Open_vSwitch:927 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:345 lflow-cache-entries-cache-matches:287 lflow-cache-size-KB:1441 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:623 ofctrl_installed_flow_usage-KB:455 ofctrl_sb_flow_ref_usage-KB:235
Dec 11 14:29:26 compute-0 nova_compute[189440]: 2025-12-11 14:29:26.390 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:29 compute-0 podman[203650]: time="2025-12-11T14:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:29:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec 11 14:29:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5732 "" "Go-http-client/1.1"
Dec 11 14:29:31 compute-0 nova_compute[189440]: 2025-12-11 14:29:31.144 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:31 compute-0 nova_compute[189440]: 2025-12-11 14:29:31.247 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:31 compute-0 openstack_network_exporter[205834]: ERROR   14:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:29:31 compute-0 openstack_network_exporter[205834]: ERROR   14:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:29:31 compute-0 openstack_network_exporter[205834]: ERROR   14:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:29:31 compute-0 openstack_network_exporter[205834]: ERROR   14:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:29:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:29:31 compute-0 openstack_network_exporter[205834]: ERROR   14:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:29:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:29:33 compute-0 nova_compute[189440]: 2025-12-11 14:29:33.103 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:35 compute-0 podman[252819]: 2025-12-11 14:29:35.507437338 +0000 UTC m=+0.090214725 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:29:35 compute-0 podman[252818]: 2025-12-11 14:29:35.51810796 +0000 UTC m=+0.116354387 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202)
Dec 11 14:29:36 compute-0 nova_compute[189440]: 2025-12-11 14:29:36.137 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:36 compute-0 nova_compute[189440]: 2025-12-11 14:29:36.146 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:36 compute-0 nova_compute[189440]: 2025-12-11 14:29:36.250 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:41 compute-0 nova_compute[189440]: 2025-12-11 14:29:41.148 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:41 compute-0 nova_compute[189440]: 2025-12-11 14:29:41.252 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:41 compute-0 podman[252861]: 2025-12-11 14:29:41.48731362 +0000 UTC m=+0.073419683 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, release-0.7.12=, vcs-type=git, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 11 14:29:41 compute-0 podman[252863]: 2025-12-11 14:29:41.514734013 +0000 UTC m=+0.096038179 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 11 14:29:41 compute-0 podman[252862]: 2025-12-11 14:29:41.529654958 +0000 UTC m=+0.101063861 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 11 14:29:41 compute-0 podman[252860]: 2025-12-11 14:29:41.541688484 +0000 UTC m=+0.117488814 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.993 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.995 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:43.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3ea1bf7aa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:43.002 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance c76d24aa-f7f9-49a6-b248-ab2d703c2930 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 11 14:29:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:43.003 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/c76d24aa-f7f9-49a6-b248-ab2d703c2930 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}cccfdb98f7814d2104ef30522629f30f2e7025f3d377e4b2e1b0c401a523009e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 11 14:29:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:44.219 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1978 Content-Type: application/json Date: Thu, 11 Dec 2025 14:29:43 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a1a98d58-42f7-413f-a11c-96454967e346 x-openstack-request-id: req-a1a98d58-42f7-413f-a11c-96454967e346 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 11 14:29:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:44.219 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "c76d24aa-f7f9-49a6-b248-ab2d703c2930", "name": "tempest-ServerActionsTestJSON-server-841961376", "status": "ACTIVE", "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "user_id": "5fde21296346489db3133bd3ccf4e92f", "metadata": {}, "hostId": "bfa6967790821cb524e075f36751eee0133913381a30dd5207c82b07", "image": {"id": "64e29581-a774-4784-b0cb-b4428b3222f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/64e29581-a774-4784-b0cb-b4428b3222f4"}]}, "flavor": {"id": "639c6f85-2c0f-4003-98b6-94c63eeb9fc7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/639c6f85-2c0f-4003-98b6-94c63eeb9fc7"}]}, "created": "2025-12-11T14:28:29Z", "updated": "2025-12-11T14:28:37Z", "addresses": {"tempest-ServerActionsTestJSON-543415014-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:26:c9:b5"}, {"version": 4, "addr": "192.168.122.225", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:26:c9:b5"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/c76d24aa-f7f9-49a6-b248-ab2d703c2930"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/c76d24aa-f7f9-49a6-b248-ab2d703c2930"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-991552200", "OS-SRV-USG:launched_at": "2025-12-11T14:28:37.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1044664559"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000008", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 11 14:29:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:44.219 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/c76d24aa-f7f9-49a6-b248-ab2d703c2930 used request id req-a1a98d58-42f7-413f-a11c-96454967e346 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 11 14:29:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:44.221 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c76d24aa-f7f9-49a6-b248-ab2d703c2930', 'name': 'tempest-ServerActionsTestJSON-server-841961376', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000008', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3e4b83c3ff8a49fb829dba1ec8a2121e', 'user_id': '5fde21296346489db3133bd3ccf4e92f', 'hostId': 'bfa6967790821cb524e075f36751eee0133913381a30dd5207c82b07', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:29:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:44.225 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 11 14:29:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:44.225 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}cccfdb98f7814d2104ef30522629f30f2e7025f3d377e4b2e1b0c401a523009e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.419 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1871 Content-Type: application/json Date: Thu, 11 Dec 2025 14:29:44 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ebe0031e-5aeb-4735-aa2c-e9e94d342f69 x-openstack-request-id: req-ebe0031e-5aeb-4735-aa2c-e9e94d342f69 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.419 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee", "name": "tempest-AttachInterfacesUnderV243Test-server-29252937", "status": "ACTIVE", "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "user_id": "a714564f83e74b39aa33b964e9913421", "metadata": {}, "hostId": "5dbf343690864d1983c881e8bc082672162e288a5198d8460c1b4743", "image": {"id": "64e29581-a774-4784-b0cb-b4428b3222f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/64e29581-a774-4784-b0cb-b4428b3222f4"}]}, "flavor": {"id": "639c6f85-2c0f-4003-98b6-94c63eeb9fc7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/639c6f85-2c0f-4003-98b6-94c63eeb9fc7"}]}, "created": "2025-12-11T14:26:53Z", "updated": "2025-12-11T14:28:11Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-980185420-network": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d2:1f:b8"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1484208004", "OS-SRV-USG:launched_at": "2025-12-11T14:28:11.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--821815401"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.420 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee used request id req-ebe0031e-5aeb-4735-aa2c-e9e94d342f69 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.421 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b112e8a-c27d-4b2e-91fc-81552a0cd4ee', 'name': 'tempest-AttachInterfacesUnderV243Test-server-29252937', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b0f7c7a5f01c4c7a9fd2fa3668dcd463', 'user_id': 'a714564f83e74b39aa33b964e9913421', 'hostId': '5dbf343690864d1983c881e8bc082672162e288a5198d8460c1b4743', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.426 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64b46b2-b462-4f18-99a0-33cce11b70c3', 'name': 'tempest-ServerAddressesTestJSON-server-1930571022', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16cfe265641045f6adca23a64917736e', 'user_id': '719b5c4df50d474091f6f471803c8a13', 'hostId': '2fcddfdd3b298ab69316782a145f6113cf5f677ad9bc894793473b66', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.426 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.427 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:29:45.426931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.430 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for c76d24aa-f7f9-49a6-b248-ab2d703c2930 / tap52f6df19-5c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.431 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.435 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee / tap6427f2b4-25 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.435 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.439 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.439 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.439 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.440 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:29:45.439956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.472 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/cpu volume: 35190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.496 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/cpu volume: 35010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.522 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/cpu volume: 36040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.524 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.525 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:29:45.525593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.541 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.allocation volume: 30613504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.542 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.554 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.555 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.567 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.568 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.570 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.570 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.571 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:29:45.569588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.572 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.572 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.573 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/memory.usage volume: 42.6796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.573 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/memory.usage volume: 46.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:29:45.572735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.574 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/memory.usage volume: 41.73828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.576 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.incoming.bytes volume: 1706 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:29:45.575823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.576 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.577 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-11T14:29:45.578478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.579 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.579 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-841961376>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-29252937>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-841961376>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-29252937>]
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.580 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.581 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.581 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.581 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:29:45.580919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.583 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.584 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.584 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:29:45.583468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.586 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:29:45.586881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.587 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.587 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.588 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.590 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.590 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.591 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:29:45.589743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.591 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.592 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.592 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:29:45.594267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.630 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.read.latency volume: 525008661 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.630 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.read.latency volume: 52118971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.668 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.latency volume: 509451213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.669 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.latency volume: 51551775 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.710 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 715818456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.711 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 141083317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.712 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.713 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.713 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.713 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.714 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.read.requests volume: 1110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.714 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:29:45.713569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.715 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.requests volume: 1104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.715 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.716 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 1133 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.716 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.718 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.719 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.719 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.720 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:29:45.719041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.720 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.721 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.722 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.722 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.724 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.724 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.724 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.write.bytes volume: 72953856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.724 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:29:45.724326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.725 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.bytes volume: 72957952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.725 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.726 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 73019392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.726 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.727 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.727 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.728 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.write.latency volume: 3349800959 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.728 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.728 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.latency volume: 4346594223 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.728 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.729 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 10586132488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.729 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.730 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:29:45.727907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.731 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.731 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.731 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.732 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:29:45.731255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.733 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.733 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.write.requests volume: 307 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.734 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.734 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.requests volume: 318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.734 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.735 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.735 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:29:45.733684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.736 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.736 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.736 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.737 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.737 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:29:45.736819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.738 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.738 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.738 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.739 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.739 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-841961376>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-29252937>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-841961376>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-29252937>]
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-11T14:29:45.738903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.739 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.740 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.741 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.741 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.741 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.741 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:29:45.740197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:29:45.741607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.742 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.742 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.743 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.744 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.744 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.744 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.744 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:29:45.743614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.745 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.745 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.745 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.746 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.746 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.746 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:29:45.745467) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.747 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.747 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.747 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.748 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.748 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:29:45.747737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.748 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.749 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.749 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.750 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.750 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.750 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.750 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.750 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.read.bytes volume: 30755328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:29:45.750655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.751 14 DEBUG ceilometer.compute.pollsters [-] c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.752 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.bytes volume: 30521856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.752 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.752 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 31009280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.753 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:45 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:29:45.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:29:46 compute-0 nova_compute[189440]: 2025-12-11 14:29:46.152 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:46 compute-0 nova_compute[189440]: 2025-12-11 14:29:46.255 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:48 compute-0 ovn_controller[97832]: 2025-12-11T14:29:48Z|00090|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:29:48 compute-0 ovn_controller[97832]: 2025-12-11T14:29:48Z|00091|binding|INFO|Releasing lport 0c7654b9-d19e-4dbf-aa95-fd31082835ab from this chassis (sb_readonly=0)
Dec 11 14:29:48 compute-0 ovn_controller[97832]: 2025-12-11T14:29:48Z|00092|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:29:48 compute-0 nova_compute[189440]: 2025-12-11 14:29:48.435 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:48 compute-0 podman[252933]: 2025-12-11 14:29:48.642493787 +0000 UTC m=+0.185479753 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 11 14:29:50 compute-0 nova_compute[189440]: 2025-12-11 14:29:50.678 189444 DEBUG oslo_concurrency.lockutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:29:50 compute-0 nova_compute[189440]: 2025-12-11 14:29:50.679 189444 DEBUG oslo_concurrency.lockutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:29:50 compute-0 nova_compute[189440]: 2025-12-11 14:29:50.681 189444 INFO nova.compute.manager [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Rebooting instance#033[00m
Dec 11 14:29:50 compute-0 nova_compute[189440]: 2025-12-11 14:29:50.700 189444 DEBUG oslo_concurrency.lockutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:29:50 compute-0 nova_compute[189440]: 2025-12-11 14:29:50.701 189444 DEBUG oslo_concurrency.lockutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquired lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:29:50 compute-0 nova_compute[189440]: 2025-12-11 14:29:50.702 189444 DEBUG nova.network.neutron [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:29:51 compute-0 nova_compute[189440]: 2025-12-11 14:29:51.154 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:51 compute-0 nova_compute[189440]: 2025-12-11 14:29:51.257 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:52 compute-0 podman[252974]: 2025-12-11 14:29:52.473003117 +0000 UTC m=+0.066759049 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:29:52 compute-0 podman[252973]: 2025-12-11 14:29:52.485852792 +0000 UTC m=+0.085493209 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_id=edpm, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.554 189444 DEBUG nova.network.neutron [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Updating instance_info_cache with network_info: [{"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.585 189444 DEBUG oslo_concurrency.lockutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Releasing lock "refresh_cache-c76d24aa-f7f9-49a6-b248-ab2d703c2930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.586 189444 DEBUG nova.compute.manager [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:29:52 compute-0 kernel: tap52f6df19-5c (unregistering): left promiscuous mode
Dec 11 14:29:52 compute-0 NetworkManager[56353]: <info>  [1765463392.8038] device (tap52f6df19-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 11 14:29:52 compute-0 ovn_controller[97832]: 2025-12-11T14:29:52Z|00093|binding|INFO|Releasing lport 52f6df19-5cbb-49e5-8051-125a414c0f9f from this chassis (sb_readonly=0)
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.822 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:52 compute-0 ovn_controller[97832]: 2025-12-11T14:29:52Z|00094|binding|INFO|Setting lport 52f6df19-5cbb-49e5-8051-125a414c0f9f down in Southbound
Dec 11 14:29:52 compute-0 ovn_controller[97832]: 2025-12-11T14:29:52Z|00095|binding|INFO|Removing iface tap52f6df19-5c ovn-installed in OVS
Dec 11 14:29:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:52.832 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:c9:b5 10.100.0.8'], port_security=['fa:16:3e:26:c9:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c76d24aa-f7f9-49a6-b248-ab2d703c2930', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81fb21e1-e42a-429c-bdb6-a671b908997f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e4b83c3ff8a49fb829dba1ec8a2121e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0fd90c69-6fef-4c09-94ec-ce2f215b43eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f65c3ca-604c-4a31-a0d6-f4b05c29492f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=52f6df19-5cbb-49e5-8051-125a414c0f9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:29:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:52.834 106686 INFO neutron.agent.ovn.metadata.agent [-] Port 52f6df19-5cbb-49e5-8051-125a414c0f9f in datapath 81fb21e1-e42a-429c-bdb6-a671b908997f unbound from our chassis#033[00m
Dec 11 14:29:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:52.838 106686 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81fb21e1-e42a-429c-bdb6-a671b908997f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.841 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:52.842 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[f899677d-873b-4406-9853-e6d35385f360]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:52.845 106686 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f namespace which is not needed anymore#033[00m
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.856 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:52 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 11 14:29:52 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 42.522s CPU time.
Dec 11 14:29:52 compute-0 systemd-machined[155778]: Machine qemu-8-instance-00000008 terminated.
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.935 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.941 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.990 189444 INFO nova.virt.libvirt.driver [-] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Instance destroyed successfully.#033[00m
Dec 11 14:29:52 compute-0 nova_compute[189440]: 2025-12-11 14:29:52.991 189444 DEBUG nova.objects.instance [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lazy-loading 'resources' on Instance uuid c76d24aa-f7f9-49a6-b248-ab2d703c2930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.007 189444 DEBUG nova.virt.libvirt.vif [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-841961376',display_name='tempest-ServerActionsTestJSON-server-841961376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-841961376',id=8,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4uuTromKvYazAi/ZcTswvYdpFQO/eOeQ0R7nGbb/Zq0OYhVFvcR4MV0lRBAAEY0tvtOkCbrPDklymzrDzA6JNjcl5/XMDAWsZbYP/ZSp/w8oqE1UIbRS8HSekXLExQxw==',key_name='tempest-keypair-991552200',keypairs=<?>,launch_index=0,launched_at=2025-12-11T14:28:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3e4b83c3ff8a49fb829dba1ec8a2121e',ramdisk_id='',reservation_id='r-d24sbuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-954728080',owner_user_name='tempest-ServerActionsTestJSON-954728080-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-11T14:29:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5fde21296346489db3133bd3ccf4e92f',uuid=c76d24aa-f7f9-49a6-b248-ab2d703c2930,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.008 189444 DEBUG nova.network.os_vif_util [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converting VIF {"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.009 189444 DEBUG nova.network.os_vif_util [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.010 189444 DEBUG os_vif [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.012 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.012 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52f6df19-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.014 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.018 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.020 189444 INFO os_vif [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c')#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.029 189444 DEBUG nova.virt.libvirt.driver [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Start _get_guest_xml network_info=[{"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '64e29581-a774-4784-b0cb-b4428b3222f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 11 14:29:53 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[252422]: [NOTICE]   (252426) : haproxy version is 2.8.14-c23fe91
Dec 11 14:29:53 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[252422]: [NOTICE]   (252426) : path to executable is /usr/sbin/haproxy
Dec 11 14:29:53 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[252422]: [ALERT]    (252426) : Current worker (252428) exited with code 143 (Terminated)
Dec 11 14:29:53 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[252422]: [WARNING]  (252426) : All workers exited. Exiting... (0)
Dec 11 14:29:53 compute-0 systemd[1]: libpod-822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14.scope: Deactivated successfully.
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.037 189444 WARNING nova.virt.libvirt.driver [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:29:53 compute-0 podman[253047]: 2025-12-11 14:29:53.041223653 +0000 UTC m=+0.080902677 container died 822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.045 189444 DEBUG nova.virt.libvirt.host [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.046 189444 DEBUG nova.virt.libvirt.host [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.053 189444 DEBUG nova.virt.libvirt.host [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.054 189444 DEBUG nova.virt.libvirt.host [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.055 189444 DEBUG nova.virt.libvirt.driver [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.055 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:25:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='639c6f85-2c0f-4003-98b6-94c63eeb9fc7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.056 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.056 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.056 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.057 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.057 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.057 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.058 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.058 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.059 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.059 189444 DEBUG nova.virt.hardware [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.060 189444 DEBUG nova.objects.instance [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lazy-loading 'vcpu_model' on Instance uuid c76d24aa-f7f9-49a6-b248-ab2d703c2930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.081 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.143 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.config --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.145 189444 DEBUG oslo_concurrency.lockutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.145 189444 DEBUG oslo_concurrency.lockutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.146 189444 DEBUG oslo_concurrency.lockutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.148 189444 DEBUG nova.virt.libvirt.vif [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-841961376',display_name='tempest-ServerActionsTestJSON-server-841961376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-841961376',id=8,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4uuTromKvYazAi/ZcTswvYdpFQO/eOeQ0R7nGbb/Zq0OYhVFvcR4MV0lRBAAEY0tvtOkCbrPDklymzrDzA6JNjcl5/XMDAWsZbYP/ZSp/w8oqE1UIbRS8HSekXLExQxw==',key_name='tempest-keypair-991552200',keypairs=<?>,launch_index=0,launched_at=2025-12-11T14:28:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3e4b83c3ff8a49fb829dba1ec8a2121e',ramdisk_id='',reservation_id='r-d24sbuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-954728080',owner_user_name='tempest-ServerActionsTestJSON-954728080-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-11T14:29:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5fde21296346489db3133bd3ccf4e92f',uuid=c76d24aa-f7f9-49a6-b248-ab2d703c2930,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.148 189444 DEBUG nova.network.os_vif_util [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converting VIF {"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.150 189444 DEBUG nova.network.os_vif_util [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.151 189444 DEBUG nova.objects.instance [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lazy-loading 'pci_devices' on Instance uuid c76d24aa-f7f9-49a6-b248-ab2d703c2930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.164 189444 DEBUG nova.virt.libvirt.driver [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <uuid>c76d24aa-f7f9-49a6-b248-ab2d703c2930</uuid>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <name>instance-00000008</name>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <memory>131072</memory>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <nova:name>tempest-ServerActionsTestJSON-server-841961376</nova:name>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:29:53</nova:creationTime>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <nova:flavor name="m1.nano">
Dec 11 14:29:53 compute-0 nova_compute[189440]:        <nova:memory>128</nova:memory>
Dec 11 14:29:53 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:29:53 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:29:53 compute-0 nova_compute[189440]:        <nova:ephemeral>0</nova:ephemeral>
Dec 11 14:29:53 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:29:53 compute-0 nova_compute[189440]:        <nova:user uuid="5fde21296346489db3133bd3ccf4e92f">tempest-ServerActionsTestJSON-954728080-project-member</nova:user>
Dec 11 14:29:53 compute-0 nova_compute[189440]:        <nova:project uuid="3e4b83c3ff8a49fb829dba1ec8a2121e">tempest-ServerActionsTestJSON-954728080</nova:project>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="64e29581-a774-4784-b0cb-b4428b3222f4"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <nova:ports>
Dec 11 14:29:53 compute-0 nova_compute[189440]:        <nova:port uuid="52f6df19-5cbb-49e5-8051-125a414c0f9f">
Dec 11 14:29:53 compute-0 nova_compute[189440]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:        </nova:port>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      </nova:ports>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <system>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <entry name="serial">c76d24aa-f7f9-49a6-b248-ab2d703c2930</entry>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <entry name="uuid">c76d24aa-f7f9-49a6-b248-ab2d703c2930</entry>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    </system>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <os>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  </os>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <features>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  </features>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk.config"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <interface type="ethernet">
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <mac address="fa:16:3e:26:c9:b5"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <driver name="vhost" rx_queue_size="512"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <mtu size="1442"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <target dev="tap52f6df19-5c"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    </interface>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/console.log" append="off"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <video>
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    </video>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <input type="keyboard" bus="usb"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:29:53 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:29:53 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:29:53 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:29:53 compute-0 nova_compute[189440]: </domain>
Dec 11 14:29:53 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.165 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14-userdata-shm.mount: Deactivated successfully.
Dec 11 14:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5776530dd1f93ea8275fb2e29c92965869b28e15da146400118acc05b1ef236c-merged.mount: Deactivated successfully.
Dec 11 14:29:53 compute-0 podman[253047]: 2025-12-11 14:29:53.18329727 +0000 UTC m=+0.222976294 container cleanup 822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:29:53 compute-0 systemd[1]: libpod-conmon-822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14.scope: Deactivated successfully.
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.237 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.238 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.311 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.320 189444 DEBUG nova.objects.instance [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lazy-loading 'trusted_certs' on Instance uuid c76d24aa-f7f9-49a6-b248-ab2d703c2930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.341 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.413 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.415 189444 DEBUG nova.virt.disk.api [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Checking if we can resize image /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.415 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:29:53 compute-0 podman[253085]: 2025-12-11 14:29:53.47830807 +0000 UTC m=+0.269511175 container remove 822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.481 189444 DEBUG oslo_concurrency.processutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.484 189444 DEBUG nova.virt.disk.api [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Cannot resize image /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.486 189444 DEBUG nova.objects.instance [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lazy-loading 'migration_context' on Instance uuid c76d24aa-f7f9-49a6-b248-ab2d703c2930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.494 189444 DEBUG nova.compute.manager [req-8d057bb8-f13b-44a7-84f4-ac42e6a077c8 req-78d3e9b6-1180-4ed0-84af-9b8005d3f972 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-unplugged-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.494 189444 DEBUG oslo_concurrency.lockutils [req-8d057bb8-f13b-44a7-84f4-ac42e6a077c8 req-78d3e9b6-1180-4ed0-84af-9b8005d3f972 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.494 189444 DEBUG oslo_concurrency.lockutils [req-8d057bb8-f13b-44a7-84f4-ac42e6a077c8 req-78d3e9b6-1180-4ed0-84af-9b8005d3f972 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.495 189444 DEBUG oslo_concurrency.lockutils [req-8d057bb8-f13b-44a7-84f4-ac42e6a077c8 req-78d3e9b6-1180-4ed0-84af-9b8005d3f972 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.495 189444 DEBUG nova.compute.manager [req-8d057bb8-f13b-44a7-84f4-ac42e6a077c8 req-78d3e9b6-1180-4ed0-84af-9b8005d3f972 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] No waiting events found dispatching network-vif-unplugged-52f6df19-5cbb-49e5-8051-125a414c0f9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.495 189444 WARNING nova.compute.manager [req-8d057bb8-f13b-44a7-84f4-ac42e6a077c8 req-78d3e9b6-1180-4ed0-84af-9b8005d3f972 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received unexpected event network-vif-unplugged-52f6df19-5cbb-49e5-8051-125a414c0f9f for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.497 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd16872-3827-43c1-bf95-4fbbb53b2077]: (4, ('Thu Dec 11 02:29:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f (822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14)\n822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14\nThu Dec 11 02:29:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f (822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14)\n822fa8a2d3513c53b9086b1856b4d04b39ab9a84c5b72a5177bf6f3671b95e14\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.499 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[9d605789-6607-4f8c-af70-f7755523910f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.500 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81fb21e1-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.503 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 kernel: tap81fb21e1-e0: left promiscuous mode
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.508 189444 DEBUG nova.virt.libvirt.vif [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-841961376',display_name='tempest-ServerActionsTestJSON-server-841961376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-841961376',id=8,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4uuTromKvYazAi/ZcTswvYdpFQO/eOeQ0R7nGbb/Zq0OYhVFvcR4MV0lRBAAEY0tvtOkCbrPDklymzrDzA6JNjcl5/XMDAWsZbYP/ZSp/w8oqE1UIbRS8HSekXLExQxw==',key_name='tempest-keypair-991552200',keypairs=<?>,launch_index=0,launched_at=2025-12-11T14:28:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='3e4b83c3ff8a49fb829dba1ec8a2121e',ramdisk_id='',reservation_id='r-d24sbuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-954728080',owner_user_name='tempest-ServerActionsTestJSON-954728080-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:29:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5fde21296346489db3133bd3ccf4e92f',uuid=c76d24aa-f7f9-49a6-b248-ab2d703c2930,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.508 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[c47da77e-2a9a-4d38-9585-25388e3162e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.509 189444 DEBUG nova.network.os_vif_util [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converting VIF {"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.510 189444 DEBUG nova.network.os_vif_util [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.511 189444 DEBUG os_vif [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.511 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.511 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.512 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.518 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.518 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52f6df19-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.519 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52f6df19-5c, col_values=(('external_ids', {'iface-id': '52f6df19-5cbb-49e5-8051-125a414c0f9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:c9:b5', 'vm-uuid': 'c76d24aa-f7f9-49a6-b248-ab2d703c2930'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.521 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 NetworkManager[56353]: <info>  [1765463393.5215] manager: (tap52f6df19-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.522 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.527 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[25cf962e-e2e9-4481-9efd-70a16285f3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.528 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[98b2cb7a-f4dd-4550-b1ff-af3dbff04225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.530 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.532 189444 INFO os_vif [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c')#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.546 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[d42748c6-6394-45ab-a9c0-fb013f198cb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537734, 'reachable_time': 31766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253112, 'error': None, 'target': 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d81fb21e1\x2de42a\x2d429c\x2dbdb6\x2da671b908997f.mount: Deactivated successfully.
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.551 106799 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.552 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[3526cc0a-277f-447a-8c21-1e1e7cde81c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 kernel: tap52f6df19-5c: entered promiscuous mode
Dec 11 14:29:53 compute-0 NetworkManager[56353]: <info>  [1765463393.6048] manager: (tap52f6df19-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec 11 14:29:53 compute-0 systemd-udevd[253022]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:29:53 compute-0 ovn_controller[97832]: 2025-12-11T14:29:53Z|00096|binding|INFO|Claiming lport 52f6df19-5cbb-49e5-8051-125a414c0f9f for this chassis.
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.606 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 ovn_controller[97832]: 2025-12-11T14:29:53Z|00097|binding|INFO|52f6df19-5cbb-49e5-8051-125a414c0f9f: Claiming fa:16:3e:26:c9:b5 10.100.0.8
Dec 11 14:29:53 compute-0 NetworkManager[56353]: <info>  [1765463393.6189] device (tap52f6df19-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 14:29:53 compute-0 NetworkManager[56353]: <info>  [1765463393.6195] device (tap52f6df19-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 11 14:29:53 compute-0 ovn_controller[97832]: 2025-12-11T14:29:53Z|00098|binding|INFO|Setting lport 52f6df19-5cbb-49e5-8051-125a414c0f9f ovn-installed in OVS
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.624 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 nova_compute[189440]: 2025-12-11 14:29:53.626 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:53 compute-0 systemd-machined[155778]: New machine qemu-9-instance-00000008.
Dec 11 14:29:53 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000008.
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.801 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:c9:b5 10.100.0.8'], port_security=['fa:16:3e:26:c9:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c76d24aa-f7f9-49a6-b248-ab2d703c2930', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81fb21e1-e42a-429c-bdb6-a671b908997f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e4b83c3ff8a49fb829dba1ec8a2121e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0fd90c69-6fef-4c09-94ec-ce2f215b43eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f65c3ca-604c-4a31-a0d6-f4b05c29492f, chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=52f6df19-5cbb-49e5-8051-125a414c0f9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.804 106686 INFO neutron.agent.ovn.metadata.agent [-] Port 52f6df19-5cbb-49e5-8051-125a414c0f9f in datapath 81fb21e1-e42a-429c-bdb6-a671b908997f bound to our chassis#033[00m
Dec 11 14:29:53 compute-0 ovn_controller[97832]: 2025-12-11T14:29:53Z|00099|binding|INFO|Setting lport 52f6df19-5cbb-49e5-8051-125a414c0f9f up in Southbound
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.814 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81fb21e1-e42a-429c-bdb6-a671b908997f#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.834 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[a5095baf-40b2-4f41-a219-d037378d2efc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.837 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81fb21e1-e1 in ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.840 239832 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81fb21e1-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.840 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[f9088fe6-9106-43fd-bd0b-453e20af3d5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.843 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7d842b72-e550-490f-9d3f-737738c50396]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.865 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[84ef3d5d-795e-42fb-a9d4-b89fe4285026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.902 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7cc2f8-db29-4c4f-bb92-04a8b2f59327]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.947 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[76ca3c51-ced5-4a06-bfbc-438c704e83ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:53 compute-0 NetworkManager[56353]: <info>  [1765463393.9641] manager: (tap81fb21e1-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Dec 11 14:29:53 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:53.965 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[47eb33fe-c135-44c3-a94e-2815e5b744f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.008 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[3b614f46-a116-4ac5-b474-62242a192290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.011 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[40e6cc05-39f4-4a94-8f55-1498f7191d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 NetworkManager[56353]: <info>  [1765463394.0345] device (tap81fb21e1-e0): carrier: link connected
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.039 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[40b04237-d22f-488a-8699-38e664cbd51f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.062 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[770b72fa-51ee-4d9b-8c98-50ac375f6e80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81fb21e1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:97:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545436, 'reachable_time': 32875, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253163, 'error': None, 'target': 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.077 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac328a2-5751-459f-a00a-c1b7c47cb21f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:97e7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545436, 'tstamp': 545436}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253168, 'error': None, 'target': 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.097 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6ffb3d-586b-4dcb-84b4-85b7904c5907]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81fb21e1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:97:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545436, 'reachable_time': 32875, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253169, 'error': None, 'target': 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.129 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[6188ac58-eff9-4d40-a9a8-eca26ad28878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.183 189444 DEBUG nova.virt.libvirt.host [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Removed pending event for c76d24aa-f7f9-49a6-b248-ab2d703c2930 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.183 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463394.181743, c76d24aa-f7f9-49a6-b248-ab2d703c2930 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.184 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] VM Resumed (Lifecycle Event)#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.186 189444 DEBUG nova.compute.manager [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.190 189444 INFO nova.virt.libvirt.driver [-] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Instance rebooted successfully.#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.191 189444 DEBUG nova.compute.manager [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.196 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[400ee0a7-a80e-4435-acf2-536df852e0f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.199 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81fb21e1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.199 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.200 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81fb21e1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:29:54 compute-0 NetworkManager[56353]: <info>  [1765463394.2033] manager: (tap81fb21e1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec 11 14:29:54 compute-0 kernel: tap81fb21e1-e0: entered promiscuous mode
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.203 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.207 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81fb21e1-e0, col_values=(('external_ids', {'iface-id': '0c7654b9-d19e-4dbf-aa95-fd31082835ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:29:54 compute-0 ovn_controller[97832]: 2025-12-11T14:29:54Z|00100|binding|INFO|Releasing lport 0c7654b9-d19e-4dbf-aa95-fd31082835ab from this chassis (sb_readonly=0)
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.208 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.220 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.221 106686 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81fb21e1-e42a-429c-bdb6-a671b908997f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81fb21e1-e42a-429c-bdb6-a671b908997f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.223 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[493ab982-2f45-4fb7-acd4-ed73dc80c91d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.224 106686 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: global
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    log         /dev/log local0 debug
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    log-tag     haproxy-metadata-proxy-81fb21e1-e42a-429c-bdb6-a671b908997f
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    user        root
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    group       root
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    maxconn     1024
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    pidfile     /var/lib/neutron/external/pids/81fb21e1-e42a-429c-bdb6-a671b908997f.pid.haproxy
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    daemon
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: defaults
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    log global
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    mode http
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    option httplog
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    option dontlognull
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    option http-server-close
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    option forwardfor
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    retries                 3
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    timeout http-request    30s
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    timeout connect         30s
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    timeout client          32s
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    timeout server          32s
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    timeout http-keep-alive 30s
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: listen listener
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    bind 169.254.169.254:80
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    server metadata /var/lib/neutron/metadata_proxy
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]:    http-request add-header X-OVN-Network-ID 81fb21e1-e42a-429c-bdb6-a671b908997f
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 11 14:29:54 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:54.225 106686 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'env', 'PROCESS_TAG=haproxy-81fb21e1-e42a-429c-bdb6-a671b908997f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81fb21e1-e42a-429c-bdb6-a671b908997f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.234 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.239 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.270 189444 DEBUG oslo_concurrency.lockutils [None req-9596e824-e92a-4113-a362-bd63bd55156f 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.274 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.274 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463394.1826906, c76d24aa-f7f9-49a6-b248-ab2d703c2930 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.274 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] VM Started (Lifecycle Event)#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.303 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.309 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 11 14:29:54 compute-0 podman[253202]: 2025-12-11 14:29:54.591257505 +0000 UTC m=+0.025088787 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 11 14:29:54 compute-0 podman[253202]: 2025-12-11 14:29:54.690298926 +0000 UTC m=+0.124130188 container create 014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 11 14:29:54 compute-0 ovn_controller[97832]: 2025-12-11T14:29:54Z|00101|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:29:54 compute-0 ovn_controller[97832]: 2025-12-11T14:29:54Z|00102|binding|INFO|Releasing lport 0c7654b9-d19e-4dbf-aa95-fd31082835ab from this chassis (sb_readonly=0)
Dec 11 14:29:54 compute-0 ovn_controller[97832]: 2025-12-11T14:29:54Z|00103|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:29:54 compute-0 nova_compute[189440]: 2025-12-11 14:29:54.910 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:54 compute-0 systemd[1]: Started libpod-conmon-014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2.scope.
Dec 11 14:29:54 compute-0 systemd[1]: Started libcrun container.
Dec 11 14:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc214091070956c758046c66acf474ea115271b5d66839fd028c6327f1d606db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 11 14:29:54 compute-0 podman[253202]: 2025-12-11 14:29:54.988127295 +0000 UTC m=+0.421958587 container init 014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 11 14:29:55 compute-0 podman[253202]: 2025-12-11 14:29:55.00016781 +0000 UTC m=+0.433999092 container start 014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 11 14:29:55 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[253218]: [NOTICE]   (253222) : New worker (253224) forked
Dec 11 14:29:55 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[253218]: [NOTICE]   (253222) : Loading success.
Dec 11 14:29:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:55.212 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.214 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.567 189444 DEBUG nova.compute.manager [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.568 189444 DEBUG oslo_concurrency.lockutils [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.568 189444 DEBUG oslo_concurrency.lockutils [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.569 189444 DEBUG oslo_concurrency.lockutils [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.569 189444 DEBUG nova.compute.manager [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] No waiting events found dispatching network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.569 189444 WARNING nova.compute.manager [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received unexpected event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f for instance with vm_state active and task_state None.#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.569 189444 DEBUG nova.compute.manager [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.570 189444 DEBUG oslo_concurrency.lockutils [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.575 189444 DEBUG oslo_concurrency.lockutils [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.575 189444 DEBUG oslo_concurrency.lockutils [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.576 189444 DEBUG nova.compute.manager [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] No waiting events found dispatching network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.576 189444 WARNING nova.compute.manager [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received unexpected event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f for instance with vm_state active and task_state None.#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.576 189444 DEBUG nova.compute.manager [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.576 189444 DEBUG oslo_concurrency.lockutils [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.576 189444 DEBUG oslo_concurrency.lockutils [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.577 189444 DEBUG oslo_concurrency.lockutils [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.577 189444 DEBUG nova.compute.manager [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] No waiting events found dispatching network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:29:55 compute-0 nova_compute[189440]: 2025-12-11 14:29:55.577 189444 WARNING nova.compute.manager [req-f66accd0-2938-4168-ab71-baca3b948fca req-abdd21ac-423f-47c3-a148-b73220c384b7 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received unexpected event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f for instance with vm_state active and task_state None.#033[00m
Dec 11 14:29:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:29:55.710 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:29:56 compute-0 nova_compute[189440]: 2025-12-11 14:29:56.157 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:58 compute-0 nova_compute[189440]: 2025-12-11 14:29:58.522 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:29:59 compute-0 podman[203650]: time="2025-12-11T14:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:29:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec 11 14:29:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5724 "" "Go-http-client/1.1"
Dec 11 14:30:01 compute-0 nova_compute[189440]: 2025-12-11 14:30:01.160 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:01 compute-0 openstack_network_exporter[205834]: ERROR   14:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:30:01 compute-0 openstack_network_exporter[205834]: ERROR   14:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:30:01 compute-0 openstack_network_exporter[205834]: ERROR   14:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:30:01 compute-0 openstack_network_exporter[205834]: ERROR   14:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:30:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:30:01 compute-0 openstack_network_exporter[205834]: ERROR   14:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:30:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:30:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:30:01.712 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:30:03 compute-0 nova_compute[189440]: 2025-12-11 14:30:03.526 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:30:04.112 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:30:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:30:04.114 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:30:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:30:04.116 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:30:05 compute-0 nova_compute[189440]: 2025-12-11 14:30:05.768 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:06 compute-0 nova_compute[189440]: 2025-12-11 14:30:06.163 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:06 compute-0 podman[253233]: 2025-12-11 14:30:06.500819558 +0000 UTC m=+0.100611990 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:30:06 compute-0 podman[253234]: 2025-12-11 14:30:06.51474099 +0000 UTC m=+0.114171993 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:30:07 compute-0 nova_compute[189440]: 2025-12-11 14:30:07.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:30:08 compute-0 nova_compute[189440]: 2025-12-11 14:30:08.530 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:10 compute-0 nova_compute[189440]: 2025-12-11 14:30:10.245 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:11 compute-0 nova_compute[189440]: 2025-12-11 14:30:11.165 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:11 compute-0 nova_compute[189440]: 2025-12-11 14:30:11.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:30:11 compute-0 nova_compute[189440]: 2025-12-11 14:30:11.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:30:12 compute-0 nova_compute[189440]: 2025-12-11 14:30:12.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:30:12 compute-0 podman[253279]: 2025-12-11 14:30:12.508916943 +0000 UTC m=+0.097935304 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:30:12 compute-0 podman[253278]: 2025-12-11 14:30:12.53076861 +0000 UTC m=+0.112709668 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 11 14:30:12 compute-0 podman[253277]: 2025-12-11 14:30:12.532369199 +0000 UTC m=+0.129893579 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 11 14:30:12 compute-0 podman[253280]: 2025-12-11 14:30:12.556992044 +0000 UTC m=+0.132070053 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 14:30:13 compute-0 nova_compute[189440]: 2025-12-11 14:30:13.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:30:13 compute-0 nova_compute[189440]: 2025-12-11 14:30:13.534 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:16 compute-0 nova_compute[189440]: 2025-12-11 14:30:16.168 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:16 compute-0 nova_compute[189440]: 2025-12-11 14:30:16.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:30:16 compute-0 nova_compute[189440]: 2025-12-11 14:30:16.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:30:16 compute-0 nova_compute[189440]: 2025-12-11 14:30:16.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:30:16 compute-0 nova_compute[189440]: 2025-12-11 14:30:16.759 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:30:16 compute-0 nova_compute[189440]: 2025-12-11 14:30:16.760 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:30:16 compute-0 nova_compute[189440]: 2025-12-11 14:30:16.760 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:30:16 compute-0 nova_compute[189440]: 2025-12-11 14:30:16.761 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64b46b2-b462-4f18-99a0-33cce11b70c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:30:17 compute-0 nova_compute[189440]: 2025-12-11 14:30:17.961 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updating instance_info_cache with network_info: [{"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:30:17 compute-0 nova_compute[189440]: 2025-12-11 14:30:17.976 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:30:17 compute-0 nova_compute[189440]: 2025-12-11 14:30:17.976 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:30:18 compute-0 nova_compute[189440]: 2025-12-11 14:30:18.537 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:19 compute-0 podman[253348]: 2025-12-11 14:30:19.561147795 +0000 UTC m=+0.155660881 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 11 14:30:20 compute-0 nova_compute[189440]: 2025-12-11 14:30:20.972 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.171 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.390 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.391 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.392 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.392 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.710 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.796 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.796 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.863 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.871 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.934 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.935 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:30:21 compute-0 nova_compute[189440]: 2025-12-11 14:30:21.993 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.002 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.078 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.079 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.137 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.506 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.508 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4833MB free_disk=72.23974609375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.508 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.509 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.710 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.710 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.711 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance c76d24aa-f7f9-49a6-b248-ab2d703c2930 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.711 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.712 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.779 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.799 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.801 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:30:22 compute-0 nova_compute[189440]: 2025-12-11 14:30:22.801 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:30:23 compute-0 podman[253394]: 2025-12-11 14:30:23.488592115 +0000 UTC m=+0.082987237 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc.)
Dec 11 14:30:23 compute-0 podman[253395]: 2025-12-11 14:30:23.49202921 +0000 UTC m=+0.073354462 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:30:23 compute-0 nova_compute[189440]: 2025-12-11 14:30:23.541 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:23 compute-0 nova_compute[189440]: 2025-12-11 14:30:23.801 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:30:26 compute-0 nova_compute[189440]: 2025-12-11 14:30:26.175 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:26 compute-0 ovn_controller[97832]: 2025-12-11T14:30:26Z|00104|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:30:26 compute-0 ovn_controller[97832]: 2025-12-11T14:30:26Z|00105|binding|INFO|Releasing lport 0c7654b9-d19e-4dbf-aa95-fd31082835ab from this chassis (sb_readonly=0)
Dec 11 14:30:26 compute-0 ovn_controller[97832]: 2025-12-11T14:30:26Z|00106|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:30:27 compute-0 nova_compute[189440]: 2025-12-11 14:30:27.049 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:28 compute-0 nova_compute[189440]: 2025-12-11 14:30:28.546 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:29 compute-0 podman[203650]: time="2025-12-11T14:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:30:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec 11 14:30:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5724 "" "Go-http-client/1.1"
Dec 11 14:30:31 compute-0 nova_compute[189440]: 2025-12-11 14:30:31.179 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:31 compute-0 openstack_network_exporter[205834]: ERROR   14:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:30:31 compute-0 openstack_network_exporter[205834]: ERROR   14:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:30:31 compute-0 openstack_network_exporter[205834]: ERROR   14:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:30:31 compute-0 openstack_network_exporter[205834]: ERROR   14:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:30:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:30:31 compute-0 openstack_network_exporter[205834]: ERROR   14:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:30:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:30:31 compute-0 ovn_controller[97832]: 2025-12-11T14:30:31Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:c9:b5 10.100.0.8
Dec 11 14:30:33 compute-0 nova_compute[189440]: 2025-12-11 14:30:33.551 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:36 compute-0 nova_compute[189440]: 2025-12-11 14:30:36.183 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:30:37 compute-0 podman[253442]: 2025-12-11 14:30:37.089963967 +0000 UTC m=+0.091223710 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 14:30:37 compute-0 podman[253443]: 2025-12-11 14:30:37.099132092 +0000 UTC m=+0.091108347 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:30:38 compute-0 nova_compute[189440]: 2025-12-11 14:30:38.554 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:41 compute-0 nova_compute[189440]: 2025-12-11 14:30:41.185 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:43 compute-0 podman[253482]: 2025-12-11 14:30:43.499272756 +0000 UTC m=+0.088441992 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 11 14:30:43 compute-0 podman[253483]: 2025-12-11 14:30:43.512298815 +0000 UTC m=+0.099358089 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release-0.7.12=, io.buildah.version=1.29.0, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=)
Dec 11 14:30:43 compute-0 podman[253484]: 2025-12-11 14:30:43.532167933 +0000 UTC m=+0.122507598 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 11 14:30:43 compute-0 podman[253485]: 2025-12-11 14:30:43.549271393 +0000 UTC m=+0.130069743 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 11 14:30:43 compute-0 nova_compute[189440]: 2025-12-11 14:30:43.559 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:44 compute-0 nova_compute[189440]: 2025-12-11 14:30:44.304 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:46 compute-0 nova_compute[189440]: 2025-12-11 14:30:46.188 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:48 compute-0 nova_compute[189440]: 2025-12-11 14:30:48.563 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:50 compute-0 nova_compute[189440]: 2025-12-11 14:30:50.266 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:50 compute-0 podman[253567]: 2025-12-11 14:30:50.547586078 +0000 UTC m=+0.134688956 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:30:51 compute-0 nova_compute[189440]: 2025-12-11 14:30:51.191 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:53 compute-0 nova_compute[189440]: 2025-12-11 14:30:53.566 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:54 compute-0 podman[253594]: 2025-12-11 14:30:54.470640191 +0000 UTC m=+0.067436256 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:30:54 compute-0 podman[253593]: 2025-12-11 14:30:54.481100287 +0000 UTC m=+0.083982992 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, version=9.6, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 11 14:30:55 compute-0 nova_compute[189440]: 2025-12-11 14:30:55.893 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:30:55.893 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 11 14:30:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:30:55.895 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 11 14:30:56 compute-0 nova_compute[189440]: 2025-12-11 14:30:56.194 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:56 compute-0 ovn_controller[97832]: 2025-12-11T14:30:56Z|00107|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:30:56 compute-0 ovn_controller[97832]: 2025-12-11T14:30:56Z|00108|binding|INFO|Releasing lport 0c7654b9-d19e-4dbf-aa95-fd31082835ab from this chassis (sb_readonly=0)
Dec 11 14:30:56 compute-0 ovn_controller[97832]: 2025-12-11T14:30:56Z|00109|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:30:56 compute-0 nova_compute[189440]: 2025-12-11 14:30:56.493 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:58 compute-0 nova_compute[189440]: 2025-12-11 14:30:58.569 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:30:59 compute-0 podman[203650]: time="2025-12-11T14:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:30:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec 11 14:30:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5728 "" "Go-http-client/1.1"
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.851 189444 DEBUG oslo_concurrency.lockutils [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.852 189444 DEBUG oslo_concurrency.lockutils [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.852 189444 DEBUG oslo_concurrency.lockutils [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.853 189444 DEBUG oslo_concurrency.lockutils [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.854 189444 DEBUG oslo_concurrency.lockutils [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.856 189444 INFO nova.compute.manager [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Terminating instance
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.858 189444 DEBUG nova.compute.manager [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 11 14:31:00 compute-0 kernel: tap52f6df19-5c (unregistering): left promiscuous mode
Dec 11 14:31:00 compute-0 NetworkManager[56353]: <info>  [1765463460.8984] device (tap52f6df19-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 11 14:31:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:00.899 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 11 14:31:00 compute-0 ovn_controller[97832]: 2025-12-11T14:31:00Z|00110|binding|INFO|Releasing lport 52f6df19-5cbb-49e5-8051-125a414c0f9f from this chassis (sb_readonly=0)
Dec 11 14:31:00 compute-0 ovn_controller[97832]: 2025-12-11T14:31:00Z|00111|binding|INFO|Setting lport 52f6df19-5cbb-49e5-8051-125a414c0f9f down in Southbound
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.911 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:31:00 compute-0 ovn_controller[97832]: 2025-12-11T14:31:00Z|00112|binding|INFO|Removing iface tap52f6df19-5c ovn-installed in OVS
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.915 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:31:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:00.922 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:c9:b5 10.100.0.8'], port_security=['fa:16:3e:26:c9:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c76d24aa-f7f9-49a6-b248-ab2d703c2930', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81fb21e1-e42a-429c-bdb6-a671b908997f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e4b83c3ff8a49fb829dba1ec8a2121e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0fd90c69-6fef-4c09-94ec-ce2f215b43eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f65c3ca-604c-4a31-a0d6-f4b05c29492f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=52f6df19-5cbb-49e5-8051-125a414c0f9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:31:00 compute-0 nova_compute[189440]: 2025-12-11 14:31:00.924 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:31:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:00.925 106686 INFO neutron.agent.ovn.metadata.agent [-] Port 52f6df19-5cbb-49e5-8051-125a414c0f9f in datapath 81fb21e1-e42a-429c-bdb6-a671b908997f unbound from our chassis
Dec 11 14:31:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:00.929 106686 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81fb21e1-e42a-429c-bdb6-a671b908997f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 11 14:31:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:00.931 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb37e5a-3d30-47d2-b638-d88418ce78ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 11 14:31:00 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:00.932 106686 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f namespace which is not needed anymore
Dec 11 14:31:00 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 11 14:31:00 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Consumed 42.877s CPU time.
Dec 11 14:31:00 compute-0 systemd-machined[155778]: Machine qemu-9-instance-00000008 terminated.
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.090 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.097 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.137 189444 INFO nova.virt.libvirt.driver [-] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Instance destroyed successfully.
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.138 189444 DEBUG nova.objects.instance [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lazy-loading 'resources' on Instance uuid c76d24aa-f7f9-49a6-b248-ab2d703c2930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.197 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.257 189444 DEBUG nova.virt.libvirt.vif [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-841961376',display_name='tempest-ServerActionsTestJSON-server-841961376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-841961376',id=8,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4uuTromKvYazAi/ZcTswvYdpFQO/eOeQ0R7nGbb/Zq0OYhVFvcR4MV0lRBAAEY0tvtOkCbrPDklymzrDzA6JNjcl5/XMDAWsZbYP/ZSp/w8oqE1UIbRS8HSekXLExQxw==',key_name='tempest-keypair-991552200',keypairs=<?>,launch_index=0,launched_at=2025-12-11T14:28:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3e4b83c3ff8a49fb829dba1ec8a2121e',ramdisk_id='',reservation_id='r-d24sbuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-954728080',owner_user_name='tempest-ServerActionsTestJSON-954728080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-11T14:29:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5fde21296346489db3133bd3ccf4e92f',uuid=c76d24aa-f7f9-49a6-b248-ab2d703c2930,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.257 189444 DEBUG nova.network.os_vif_util [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converting VIF {"id": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "address": "fa:16:3e:26:c9:b5", "network": {"id": "81fb21e1-e42a-429c-bdb6-a671b908997f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-543415014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e4b83c3ff8a49fb829dba1ec8a2121e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f6df19-5c", "ovs_interfaceid": "52f6df19-5cbb-49e5-8051-125a414c0f9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.258 189444 DEBUG nova.network.os_vif_util [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.259 189444 DEBUG os_vif [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.261 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.261 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52f6df19-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.264 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.266 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.271 189444 INFO os_vif [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:c9:b5,bridge_name='br-int',has_traffic_filtering=True,id=52f6df19-5cbb-49e5-8051-125a414c0f9f,network=Network(81fb21e1-e42a-429c-bdb6-a671b908997f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f6df19-5c')#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.272 189444 INFO nova.virt.libvirt.driver [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Deleting instance files /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930_del#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.272 189444 INFO nova.virt.libvirt.driver [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Deletion of /var/lib/nova/instances/c76d24aa-f7f9-49a6-b248-ab2d703c2930_del complete#033[00m
Dec 11 14:31:01 compute-0 openstack_network_exporter[205834]: ERROR   14:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:31:01 compute-0 openstack_network_exporter[205834]: ERROR   14:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:31:01 compute-0 openstack_network_exporter[205834]: ERROR   14:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:31:01 compute-0 openstack_network_exporter[205834]: ERROR   14:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:31:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:31:01 compute-0 openstack_network_exporter[205834]: ERROR   14:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:31:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.623 189444 DEBUG nova.compute.manager [req-f30e090e-ecd4-4c17-a7fb-1826e4334260 req-8c042a56-cc84-4e9e-903e-fa1109e5fe00 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-unplugged-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.623 189444 DEBUG oslo_concurrency.lockutils [req-f30e090e-ecd4-4c17-a7fb-1826e4334260 req-8c042a56-cc84-4e9e-903e-fa1109e5fe00 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.624 189444 DEBUG oslo_concurrency.lockutils [req-f30e090e-ecd4-4c17-a7fb-1826e4334260 req-8c042a56-cc84-4e9e-903e-fa1109e5fe00 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.625 189444 DEBUG oslo_concurrency.lockutils [req-f30e090e-ecd4-4c17-a7fb-1826e4334260 req-8c042a56-cc84-4e9e-903e-fa1109e5fe00 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.625 189444 DEBUG nova.compute.manager [req-f30e090e-ecd4-4c17-a7fb-1826e4334260 req-8c042a56-cc84-4e9e-903e-fa1109e5fe00 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] No waiting events found dispatching network-vif-unplugged-52f6df19-5cbb-49e5-8051-125a414c0f9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.626 189444 DEBUG nova.compute.manager [req-f30e090e-ecd4-4c17-a7fb-1826e4334260 req-8c042a56-cc84-4e9e-903e-fa1109e5fe00 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-unplugged-52f6df19-5cbb-49e5-8051-125a414c0f9f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.636 189444 INFO nova.compute.manager [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.637 189444 DEBUG oslo.service.loopingcall [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.637 189444 DEBUG nova.compute.manager [-] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 11 14:31:01 compute-0 nova_compute[189440]: 2025-12-11 14:31:01.637 189444 DEBUG nova.network.neutron [-] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 11 14:31:01 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[253218]: [NOTICE]   (253222) : haproxy version is 2.8.14-c23fe91
Dec 11 14:31:01 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[253218]: [NOTICE]   (253222) : path to executable is /usr/sbin/haproxy
Dec 11 14:31:01 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[253218]: [WARNING]  (253222) : Exiting Master process...
Dec 11 14:31:01 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[253218]: [ALERT]    (253222) : Current worker (253224) exited with code 143 (Terminated)
Dec 11 14:31:01 compute-0 neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f[253218]: [WARNING]  (253222) : All workers exited. Exiting... (0)
Dec 11 14:31:01 compute-0 systemd[1]: libpod-014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2.scope: Deactivated successfully.
Dec 11 14:31:01 compute-0 podman[253662]: 2025-12-11 14:31:01.995937443 +0000 UTC m=+0.938130656 container died 014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 14:31:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2-userdata-shm.mount: Deactivated successfully.
Dec 11 14:31:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc214091070956c758046c66acf474ea115271b5d66839fd028c6327f1d606db-merged.mount: Deactivated successfully.
Dec 11 14:31:02 compute-0 podman[253662]: 2025-12-11 14:31:02.607120912 +0000 UTC m=+1.549314115 container cleanup 014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 11 14:31:02 compute-0 systemd[1]: libpod-conmon-014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2.scope: Deactivated successfully.
Dec 11 14:31:02 compute-0 podman[253707]: 2025-12-11 14:31:02.690064238 +0000 UTC m=+0.053776311 container remove 014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 14:31:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:02.697 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[cf993610-4ad9-4ae5-abe5-542b957dbe4f]: (4, ('Thu Dec 11 02:31:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f (014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2)\n014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2\nThu Dec 11 02:31:02 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f (014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2)\n014b5df8467289abb38bb9fc589022b857ababad4c517ffd51fffd2d225f66c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:31:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:02.701 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[a3fc71ef-9fbf-447b-996c-f37ee6ed389e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:31:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:02.702 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81fb21e1-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:31:02 compute-0 nova_compute[189440]: 2025-12-11 14:31:02.704 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:02 compute-0 kernel: tap81fb21e1-e0: left promiscuous mode
Dec 11 14:31:02 compute-0 nova_compute[189440]: 2025-12-11 14:31:02.707 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:02.709 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6dad15-d543-4c13-b245-f650b331e0e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:31:02 compute-0 nova_compute[189440]: 2025-12-11 14:31:02.722 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:02.729 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[a761e95a-d34b-4025-8917-4d90586bf3cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:31:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:02.730 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3a5c9f-aa89-4d6a-87a6-b580071fd8d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:31:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:02.745 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[c8532ee4-6e06-4b3d-9d5a-299b7ba083eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545426, 'reachable_time': 30890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253721, 'error': None, 'target': 'ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:31:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d81fb21e1\x2de42a\x2d429c\x2dbdb6\x2da671b908997f.mount: Deactivated successfully.
Dec 11 14:31:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:02.749 106799 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81fb21e1-e42a-429c-bdb6-a671b908997f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 11 14:31:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:02.749 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[753ea6a1-e498-44dd-93a1-5db3f1370f7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:31:03 compute-0 nova_compute[189440]: 2025-12-11 14:31:03.904 189444 DEBUG nova.compute.manager [req-1c060e66-f68a-4ad5-a88b-9c84c9e68f6d req-ea443aea-df6b-4741-b0c6-54a2da33a659 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:31:03 compute-0 nova_compute[189440]: 2025-12-11 14:31:03.906 189444 DEBUG oslo_concurrency.lockutils [req-1c060e66-f68a-4ad5-a88b-9c84c9e68f6d req-ea443aea-df6b-4741-b0c6-54a2da33a659 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:31:03 compute-0 nova_compute[189440]: 2025-12-11 14:31:03.906 189444 DEBUG oslo_concurrency.lockutils [req-1c060e66-f68a-4ad5-a88b-9c84c9e68f6d req-ea443aea-df6b-4741-b0c6-54a2da33a659 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:31:03 compute-0 nova_compute[189440]: 2025-12-11 14:31:03.907 189444 DEBUG oslo_concurrency.lockutils [req-1c060e66-f68a-4ad5-a88b-9c84c9e68f6d req-ea443aea-df6b-4741-b0c6-54a2da33a659 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:31:03 compute-0 nova_compute[189440]: 2025-12-11 14:31:03.907 189444 DEBUG nova.compute.manager [req-1c060e66-f68a-4ad5-a88b-9c84c9e68f6d req-ea443aea-df6b-4741-b0c6-54a2da33a659 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] No waiting events found dispatching network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 11 14:31:03 compute-0 nova_compute[189440]: 2025-12-11 14:31:03.907 189444 WARNING nova.compute.manager [req-1c060e66-f68a-4ad5-a88b-9c84c9e68f6d req-ea443aea-df6b-4741-b0c6-54a2da33a659 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received unexpected event network-vif-plugged-52f6df19-5cbb-49e5-8051-125a414c0f9f for instance with vm_state active and task_state deleting.#033[00m
Dec 11 14:31:04 compute-0 nova_compute[189440]: 2025-12-11 14:31:04.036 189444 DEBUG nova.network.neutron [-] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:31:04 compute-0 nova_compute[189440]: 2025-12-11 14:31:04.099 189444 INFO nova.compute.manager [-] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Took 2.46 seconds to deallocate network for instance.#033[00m
Dec 11 14:31:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:04.113 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:31:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:04.113 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:31:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:31:04.115 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:31:04 compute-0 nova_compute[189440]: 2025-12-11 14:31:04.194 189444 DEBUG oslo_concurrency.lockutils [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:31:04 compute-0 nova_compute[189440]: 2025-12-11 14:31:04.195 189444 DEBUG oslo_concurrency.lockutils [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:31:04 compute-0 nova_compute[189440]: 2025-12-11 14:31:04.310 189444 DEBUG nova.compute.provider_tree [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:31:04 compute-0 nova_compute[189440]: 2025-12-11 14:31:04.358 189444 DEBUG nova.scheduler.client.report [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:31:04 compute-0 nova_compute[189440]: 2025-12-11 14:31:04.418 189444 DEBUG oslo_concurrency.lockutils [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:31:04 compute-0 nova_compute[189440]: 2025-12-11 14:31:04.482 189444 INFO nova.scheduler.client.report [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Deleted allocations for instance c76d24aa-f7f9-49a6-b248-ab2d703c2930#033[00m
Dec 11 14:31:05 compute-0 nova_compute[189440]: 2025-12-11 14:31:05.131 189444 DEBUG oslo_concurrency.lockutils [None req-e0d725b7-9ffa-4cf3-bd8e-3c6a952139d6 5fde21296346489db3133bd3ccf4e92f 3e4b83c3ff8a49fb829dba1ec8a2121e - - default default] Lock "c76d24aa-f7f9-49a6-b248-ab2d703c2930" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:31:06 compute-0 nova_compute[189440]: 2025-12-11 14:31:06.143 189444 DEBUG nova.compute.manager [req-efb80325-ab05-4a34-995c-976eb6b1980a req-d9e550a7-f21b-4bd9-8519-e538617f0d9c a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Received event network-vif-deleted-52f6df19-5cbb-49e5-8051-125a414c0f9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:31:06 compute-0 nova_compute[189440]: 2025-12-11 14:31:06.198 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:06 compute-0 nova_compute[189440]: 2025-12-11 14:31:06.264 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:07 compute-0 nova_compute[189440]: 2025-12-11 14:31:07.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:07 compute-0 podman[253723]: 2025-12-11 14:31:07.467460559 +0000 UTC m=+0.058384558 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:31:07 compute-0 podman[253722]: 2025-12-11 14:31:07.474840065 +0000 UTC m=+0.070790423 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 11 14:31:11 compute-0 nova_compute[189440]: 2025-12-11 14:31:11.200 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:11 compute-0 nova_compute[189440]: 2025-12-11 14:31:11.267 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:13 compute-0 nova_compute[189440]: 2025-12-11 14:31:13.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:13 compute-0 nova_compute[189440]: 2025-12-11 14:31:13.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:13 compute-0 nova_compute[189440]: 2025-12-11 14:31:13.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:31:14 compute-0 nova_compute[189440]: 2025-12-11 14:31:14.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:14 compute-0 podman[253767]: 2025-12-11 14:31:14.537406516 +0000 UTC m=+0.113448316 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 11 14:31:14 compute-0 podman[253768]: 2025-12-11 14:31:14.539137607 +0000 UTC m=+0.104982855 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, vcs-type=git, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.tags=base rhel9)
Dec 11 14:31:14 compute-0 podman[253769]: 2025-12-11 14:31:14.556575632 +0000 UTC m=+0.123632829 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 11 14:31:14 compute-0 podman[253770]: 2025-12-11 14:31:14.561696853 +0000 UTC m=+0.125258407 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.134 189444 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765463461.131999, c76d24aa-f7f9-49a6-b248-ab2d703c2930 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.135 189444 INFO nova.compute.manager [-] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] VM Stopped (Lifecycle Event)#033[00m
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.178 189444 DEBUG nova.compute.manager [None req-873404c0-17b4-4d71-89e5-80a043c88faf - - - - - -] [instance: c76d24aa-f7f9-49a6-b248-ab2d703c2930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.204 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.269 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.532 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.533 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:31:16 compute-0 nova_compute[189440]: 2025-12-11 14:31:16.533 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:31:19 compute-0 nova_compute[189440]: 2025-12-11 14:31:19.253 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updating instance_info_cache with network_info: [{"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:31:19 compute-0 ovn_controller[97832]: 2025-12-11T14:31:19Z|00113|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:31:19 compute-0 ovn_controller[97832]: 2025-12-11T14:31:19Z|00114|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:31:19 compute-0 nova_compute[189440]: 2025-12-11 14:31:19.336 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:31:19 compute-0 nova_compute[189440]: 2025-12-11 14:31:19.337 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:31:19 compute-0 nova_compute[189440]: 2025-12-11 14:31:19.340 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:19 compute-0 ovn_controller[97832]: 2025-12-11T14:31:19Z|00115|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:31:19 compute-0 ovn_controller[97832]: 2025-12-11T14:31:19Z|00116|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:31:19 compute-0 nova_compute[189440]: 2025-12-11 14:31:19.541 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:21 compute-0 nova_compute[189440]: 2025-12-11 14:31:21.207 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:21 compute-0 nova_compute[189440]: 2025-12-11 14:31:21.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:21 compute-0 nova_compute[189440]: 2025-12-11 14:31:21.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:21 compute-0 nova_compute[189440]: 2025-12-11 14:31:21.273 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:21 compute-0 nova_compute[189440]: 2025-12-11 14:31:21.391 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:31:21 compute-0 nova_compute[189440]: 2025-12-11 14:31:21.392 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:31:21 compute-0 nova_compute[189440]: 2025-12-11 14:31:21.393 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:31:21 compute-0 nova_compute[189440]: 2025-12-11 14:31:21.393 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:31:21 compute-0 podman[253839]: 2025-12-11 14:31:21.578880837 +0000 UTC m=+0.164056809 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec 11 14:31:23 compute-0 nova_compute[189440]: 2025-12-11 14:31:23.390 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:31:23 compute-0 nova_compute[189440]: 2025-12-11 14:31:23.497 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:31:23 compute-0 nova_compute[189440]: 2025-12-11 14:31:23.501 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:31:23 compute-0 nova_compute[189440]: 2025-12-11 14:31:23.599 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:31:23 compute-0 nova_compute[189440]: 2025-12-11 14:31:23.607 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:31:23 compute-0 nova_compute[189440]: 2025-12-11 14:31:23.687 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:31:23 compute-0 nova_compute[189440]: 2025-12-11 14:31:23.689 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:31:23 compute-0 nova_compute[189440]: 2025-12-11 14:31:23.785 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:31:24 compute-0 nova_compute[189440]: 2025-12-11 14:31:24.215 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:31:24 compute-0 nova_compute[189440]: 2025-12-11 14:31:24.217 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5015MB free_disk=72.26886367797852GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:31:24 compute-0 nova_compute[189440]: 2025-12-11 14:31:24.217 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:31:24 compute-0 nova_compute[189440]: 2025-12-11 14:31:24.218 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:31:25 compute-0 podman[253878]: 2025-12-11 14:31:25.493176239 +0000 UTC m=+0.092853488 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:31:25 compute-0 podman[253877]: 2025-12-11 14:31:25.50250142 +0000 UTC m=+0.107232428 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, version=9.6)
Dec 11 14:31:26 compute-0 nova_compute[189440]: 2025-12-11 14:31:26.013 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:31:26 compute-0 nova_compute[189440]: 2025-12-11 14:31:26.014 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:31:26 compute-0 nova_compute[189440]: 2025-12-11 14:31:26.014 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:31:26 compute-0 nova_compute[189440]: 2025-12-11 14:31:26.014 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:31:26 compute-0 nova_compute[189440]: 2025-12-11 14:31:26.080 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:31:26 compute-0 nova_compute[189440]: 2025-12-11 14:31:26.211 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:26 compute-0 nova_compute[189440]: 2025-12-11 14:31:26.276 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:26 compute-0 nova_compute[189440]: 2025-12-11 14:31:26.663 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:31:29 compute-0 nova_compute[189440]: 2025-12-11 14:31:29.351 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:31:29 compute-0 nova_compute[189440]: 2025-12-11 14:31:29.352 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:31:29 compute-0 podman[203650]: time="2025-12-11T14:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:31:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:31:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5262 "" "Go-http-client/1.1"
Dec 11 14:31:30 compute-0 nova_compute[189440]: 2025-12-11 14:31:30.352 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:31 compute-0 nova_compute[189440]: 2025-12-11 14:31:31.045 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:31 compute-0 nova_compute[189440]: 2025-12-11 14:31:31.046 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:31:31 compute-0 nova_compute[189440]: 2025-12-11 14:31:31.214 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:31 compute-0 nova_compute[189440]: 2025-12-11 14:31:31.278 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:31 compute-0 openstack_network_exporter[205834]: ERROR   14:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:31:31 compute-0 openstack_network_exporter[205834]: ERROR   14:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:31:31 compute-0 openstack_network_exporter[205834]: ERROR   14:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:31:31 compute-0 openstack_network_exporter[205834]: ERROR   14:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:31:31 compute-0 openstack_network_exporter[205834]: ERROR   14:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:31:36 compute-0 nova_compute[189440]: 2025-12-11 14:31:36.216 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:36 compute-0 nova_compute[189440]: 2025-12-11 14:31:36.281 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:38 compute-0 podman[253917]: 2025-12-11 14:31:38.524692315 +0000 UTC m=+0.113412425 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:31:38 compute-0 podman[253918]: 2025-12-11 14:31:38.54090294 +0000 UTC m=+0.123491335 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:31:41 compute-0 nova_compute[189440]: 2025-12-11 14:31:41.219 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:41 compute-0 nova_compute[189440]: 2025-12-11 14:31:41.283 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.992 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.993 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd010d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.003 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b112e8a-c27d-4b2e-91fc-81552a0cd4ee', 'name': 'tempest-AttachInterfacesUnderV243Test-server-29252937', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b0f7c7a5f01c4c7a9fd2fa3668dcd463', 'user_id': 'a714564f83e74b39aa33b964e9913421', 'hostId': '5dbf343690864d1983c881e8bc082672162e288a5198d8460c1b4743', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.008 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64b46b2-b462-4f18-99a0-33cce11b70c3', 'name': 'tempest-ServerAddressesTestJSON-server-1930571022', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16cfe265641045f6adca23a64917736e', 'user_id': '719b5c4df50d474091f6f471803c8a13', 'hostId': '2fcddfdd3b298ab69316782a145f6113cf5f677ad9bc894793473b66', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.008 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.009 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:31:43.009219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.014 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.019 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.020 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.020 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:31:43.020842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.049 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/cpu volume: 36480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.080 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/cpu volume: 37560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.081 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.081 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:31:43.081847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.098 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.099 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.121 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.121 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.123 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.123 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:31:43.123900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.125 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.125 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.126 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.127 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.127 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/memory.usage volume: 46.4921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.127 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/memory.usage volume: 41.73828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.128 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:31:43.127016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.129 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.129 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:31:43.129344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.130 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.131 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.131 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.131 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.131 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.132 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.132 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:31:43.131705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.133 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.133 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.133 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.133 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.134 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.135 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.135 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.135 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.135 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.135 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.135 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.136 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.136 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.136 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.137 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.137 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.137 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.137 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.137 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.138 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.139 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:31:43.133727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:31:43.135491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.139 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:31:43.137156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:31:43.139733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.192 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.latency volume: 509451213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.193 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.latency volume: 51551775 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.248 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 715818456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.250 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 141083317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.251 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.251 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.251 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.251 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.251 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.252 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.requests volume: 1104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.252 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.252 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:31:43.251589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.252 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 1133 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.252 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.252 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.253 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.253 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.253 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.253 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.253 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.253 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.254 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.254 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.254 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.254 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.255 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.255 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.255 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.bytes volume: 73060352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.255 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.255 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 73019392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.256 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.256 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.256 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.256 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.256 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.257 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.latency volume: 4383891649 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.257 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.257 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 10586132488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.257 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.258 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.258 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.258 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.258 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.258 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.258 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.259 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.259 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.259 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.259 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:31:43.253283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.259 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.260 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.requests volume: 332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.260 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.260 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.260 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.260 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.261 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.261 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.261 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.261 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.262 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.262 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.262 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.262 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.263 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:31:43.255107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.263 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.264 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.264 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.264 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.265 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.265 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.265 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.266 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.266 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.264 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:31:43.256876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.267 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.267 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.267 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.268 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.268 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.268 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.268 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.268 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.269 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:31:43.258461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.269 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.270 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.bytes volume: 30521856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.270 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.270 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 31009280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.270 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:31:43.259648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:31:43.261279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.271 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.272 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:31:43.262928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:31:43.264297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:31:43.265755) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:31:43.267059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:31:43.268314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:31:43.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:31:43.269628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:31:44 compute-0 podman[253962]: 2025-12-11 14:31:44.769299411 +0000 UTC m=+0.085393690 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:31:44 compute-0 podman[253965]: 2025-12-11 14:31:44.776439611 +0000 UTC m=+0.079407668 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 11 14:31:44 compute-0 podman[253963]: 2025-12-11 14:31:44.783213222 +0000 UTC m=+0.089057347 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, container_name=kepler, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 11 14:31:44 compute-0 podman[253964]: 2025-12-11 14:31:44.789645644 +0000 UTC m=+0.091688019 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 11 14:31:46 compute-0 nova_compute[189440]: 2025-12-11 14:31:46.221 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:46 compute-0 nova_compute[189440]: 2025-12-11 14:31:46.285 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:51 compute-0 nova_compute[189440]: 2025-12-11 14:31:51.223 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:51 compute-0 nova_compute[189440]: 2025-12-11 14:31:51.288 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:52 compute-0 podman[254040]: 2025-12-11 14:31:52.563300173 +0000 UTC m=+0.153223491 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 11 14:31:55 compute-0 ovn_controller[97832]: 2025-12-11T14:31:55Z|00117|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 11 14:31:55 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 11 14:31:55 compute-0 podman[254066]: 2025-12-11 14:31:55.734122934 +0000 UTC m=+0.099078166 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:31:55 compute-0 podman[254065]: 2025-12-11 14:31:55.736067 +0000 UTC m=+0.105185861 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc.)
Dec 11 14:31:56 compute-0 nova_compute[189440]: 2025-12-11 14:31:56.227 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:56 compute-0 nova_compute[189440]: 2025-12-11 14:31:56.291 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:31:59 compute-0 podman[203650]: time="2025-12-11T14:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:31:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:31:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5273 "" "Go-http-client/1.1"
Dec 11 14:32:01 compute-0 nova_compute[189440]: 2025-12-11 14:32:01.229 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:32:01.285 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:32:01 compute-0 nova_compute[189440]: 2025-12-11 14:32:01.287 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:01 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:32:01.288 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:32:01 compute-0 nova_compute[189440]: 2025-12-11 14:32:01.294 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:01 compute-0 openstack_network_exporter[205834]: ERROR   14:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:32:01 compute-0 openstack_network_exporter[205834]: ERROR   14:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:32:01 compute-0 openstack_network_exporter[205834]: ERROR   14:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:32:01 compute-0 openstack_network_exporter[205834]: ERROR   14:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:32:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:32:01 compute-0 openstack_network_exporter[205834]: ERROR   14:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:32:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:32:02 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:32:02.291 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:32:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:32:04.114 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:32:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:32:04.116 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:32:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:32:04.117 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:32:06 compute-0 nova_compute[189440]: 2025-12-11 14:32:06.235 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:06 compute-0 nova_compute[189440]: 2025-12-11 14:32:06.296 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:09 compute-0 nova_compute[189440]: 2025-12-11 14:32:09.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:09 compute-0 podman[254108]: 2025-12-11 14:32:09.474611029 +0000 UTC m=+0.072006953 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 11 14:32:09 compute-0 podman[254107]: 2025-12-11 14:32:09.506397333 +0000 UTC m=+0.110324241 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 11 14:32:11 compute-0 nova_compute[189440]: 2025-12-11 14:32:11.237 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:11 compute-0 nova_compute[189440]: 2025-12-11 14:32:11.300 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:13 compute-0 nova_compute[189440]: 2025-12-11 14:32:13.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:13 compute-0 nova_compute[189440]: 2025-12-11 14:32:13.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:32:14 compute-0 nova_compute[189440]: 2025-12-11 14:32:14.237 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:15 compute-0 podman[254150]: 2025-12-11 14:32:15.533744233 +0000 UTC m=+0.109534463 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:32:15 compute-0 podman[254153]: 2025-12-11 14:32:15.546003775 +0000 UTC m=+0.101231486 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 11 14:32:15 compute-0 podman[254151]: 2025-12-11 14:32:15.571009669 +0000 UTC m=+0.135712055 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Dec 11 14:32:15 compute-0 podman[254152]: 2025-12-11 14:32:15.584877299 +0000 UTC m=+0.152788241 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:32:16 compute-0 nova_compute[189440]: 2025-12-11 14:32:16.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:16 compute-0 nova_compute[189440]: 2025-12-11 14:32:16.240 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:16 compute-0 nova_compute[189440]: 2025-12-11 14:32:16.303 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:18 compute-0 nova_compute[189440]: 2025-12-11 14:32:18.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:18 compute-0 nova_compute[189440]: 2025-12-11 14:32:18.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:32:18 compute-0 nova_compute[189440]: 2025-12-11 14:32:18.264 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 11 14:32:20 compute-0 nova_compute[189440]: 2025-12-11 14:32:20.260 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:21 compute-0 nova_compute[189440]: 2025-12-11 14:32:21.244 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:21 compute-0 nova_compute[189440]: 2025-12-11 14:32:21.305 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.280 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.281 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.282 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.283 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.376 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.473 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.474 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.594 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.601 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.691 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.692 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:32:22 compute-0 nova_compute[189440]: 2025-12-11 14:32:22.755 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.137 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.138 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4979MB free_disk=72.26886367797852GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.138 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.139 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.258 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.258 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.258 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.258 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.343 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.560 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.562 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:32:23 compute-0 nova_compute[189440]: 2025-12-11 14:32:23.562 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:32:23 compute-0 podman[254240]: 2025-12-11 14:32:23.57376181 +0000 UTC m=+0.163378453 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 11 14:32:25 compute-0 nova_compute[189440]: 2025-12-11 14:32:25.563 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:25 compute-0 nova_compute[189440]: 2025-12-11 14:32:25.564 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:26 compute-0 nova_compute[189440]: 2025-12-11 14:32:26.245 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:26 compute-0 nova_compute[189440]: 2025-12-11 14:32:26.308 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:26 compute-0 podman[254266]: 2025-12-11 14:32:26.525307211 +0000 UTC m=+0.093039832 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:32:26 compute-0 podman[254265]: 2025-12-11 14:32:26.531764954 +0000 UTC m=+0.115562786 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Dec 11 14:32:29 compute-0 podman[203650]: time="2025-12-11T14:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:32:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:32:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5267 "" "Go-http-client/1.1"
Dec 11 14:32:31 compute-0 nova_compute[189440]: 2025-12-11 14:32:31.248 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:31 compute-0 nova_compute[189440]: 2025-12-11 14:32:31.311 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:31 compute-0 openstack_network_exporter[205834]: ERROR   14:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:32:31 compute-0 openstack_network_exporter[205834]: ERROR   14:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:32:31 compute-0 openstack_network_exporter[205834]: ERROR   14:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:32:31 compute-0 openstack_network_exporter[205834]: ERROR   14:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:32:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:32:31 compute-0 openstack_network_exporter[205834]: ERROR   14:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:32:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:32:36 compute-0 nova_compute[189440]: 2025-12-11 14:32:36.251 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:36 compute-0 nova_compute[189440]: 2025-12-11 14:32:36.314 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.235 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.237 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.237 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.238 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.239 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.240 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.261 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.284 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.285 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Image id 64e29581-a774-4784-b0cb-b4428b3222f4 yields fingerprint b9398531008bd76fff67b1480b858b505311524e _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.286 189444 INFO nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] image 64e29581-a774-4784-b0cb-b4428b3222f4 at (/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e): checking#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.287 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] image 64e29581-a774-4784-b0cb-b4428b3222f4 at (/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.291 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.293 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] f64b46b2-b462-4f18-99a0-33cce11b70c3 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.294 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] f64b46b2-b462-4f18-99a0-33cce11b70c3 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.295 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.366 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.367 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 is backed by b9398531008bd76fff67b1480b858b505311524e _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.368 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.369 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.370 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.442 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.444 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee is backed by b9398531008bd76fff67b1480b858b505311524e _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.445 189444 WARNING nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.446 189444 WARNING nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.447 189444 INFO nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Active base files: /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.448 189444 INFO nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Removable base files: /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031 /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.449 189444 INFO nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/80c1cfa5fd1b466ec1a17bf63cbd02b7dd7f5031#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.450 189444 INFO nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/2d7c65d8bb86e8121bce6ece4bef12d64fb67e72#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.451 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.451 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.452 189444 DEBUG nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Dec 11 14:32:37 compute-0 nova_compute[189440]: 2025-12-11 14:32:37.453 189444 INFO nova.virt.libvirt.imagecache [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Dec 11 14:32:40 compute-0 podman[254312]: 2025-12-11 14:32:40.503245325 +0000 UTC m=+0.085732898 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 11 14:32:40 compute-0 podman[254311]: 2025-12-11 14:32:40.518952328 +0000 UTC m=+0.106367739 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.build-date=20251202)
Dec 11 14:32:41 compute-0 nova_compute[189440]: 2025-12-11 14:32:41.254 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:41 compute-0 nova_compute[189440]: 2025-12-11 14:32:41.317 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:46 compute-0 nova_compute[189440]: 2025-12-11 14:32:46.255 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:46 compute-0 nova_compute[189440]: 2025-12-11 14:32:46.319 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:46 compute-0 podman[254353]: 2025-12-11 14:32:46.502410055 +0000 UTC m=+0.104736040 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 11 14:32:46 compute-0 podman[254355]: 2025-12-11 14:32:46.508440358 +0000 UTC m=+0.098008540 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 11 14:32:46 compute-0 podman[254361]: 2025-12-11 14:32:46.511592982 +0000 UTC m=+0.095332986 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251210, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 11 14:32:46 compute-0 podman[254354]: 2025-12-11 14:32:46.528110515 +0000 UTC m=+0.111417249 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vcs-type=git, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 11 14:32:51 compute-0 nova_compute[189440]: 2025-12-11 14:32:51.257 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:51 compute-0 nova_compute[189440]: 2025-12-11 14:32:51.321 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:54 compute-0 podman[254428]: 2025-12-11 14:32:54.576052193 +0000 UTC m=+0.169788625 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 11 14:32:56 compute-0 nova_compute[189440]: 2025-12-11 14:32:56.259 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:56 compute-0 nova_compute[189440]: 2025-12-11 14:32:56.324 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:32:57 compute-0 podman[254455]: 2025-12-11 14:32:57.512722634 +0000 UTC m=+0.095771096 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 14:32:57 compute-0 podman[254454]: 2025-12-11 14:32:57.546979978 +0000 UTC m=+0.136323781 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 14:32:59 compute-0 podman[203650]: time="2025-12-11T14:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:32:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:32:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5267 "" "Go-http-client/1.1"
Dec 11 14:33:01 compute-0 nova_compute[189440]: 2025-12-11 14:33:01.261 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:01 compute-0 nova_compute[189440]: 2025-12-11 14:33:01.326 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:01 compute-0 openstack_network_exporter[205834]: ERROR   14:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:33:01 compute-0 openstack_network_exporter[205834]: ERROR   14:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:33:01 compute-0 openstack_network_exporter[205834]: ERROR   14:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:33:01 compute-0 openstack_network_exporter[205834]: ERROR   14:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:33:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:33:01 compute-0 openstack_network_exporter[205834]: ERROR   14:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:33:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:33:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:04.115 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:04.116 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:04.117 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:33:06 compute-0 nova_compute[189440]: 2025-12-11 14:33:06.264 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:06 compute-0 nova_compute[189440]: 2025-12-11 14:33:06.329 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:08 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:08.045 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:33:08 compute-0 nova_compute[189440]: 2025-12-11 14:33:08.046 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:08 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:08.048 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 11 14:33:08 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:08.051 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:33:11 compute-0 nova_compute[189440]: 2025-12-11 14:33:11.266 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:11 compute-0 nova_compute[189440]: 2025-12-11 14:33:11.333 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:11 compute-0 nova_compute[189440]: 2025-12-11 14:33:11.453 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:11 compute-0 podman[254501]: 2025-12-11 14:33:11.486183867 +0000 UTC m=+0.082631624 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:33:11 compute-0 podman[254500]: 2025-12-11 14:33:11.499032532 +0000 UTC m=+0.096552704 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 11 14:33:14 compute-0 nova_compute[189440]: 2025-12-11 14:33:14.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:14 compute-0 nova_compute[189440]: 2025-12-11 14:33:14.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.237 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.238 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.268 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.336 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.381 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.412 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Triggering sync for uuid f64b46b2-b462-4f18-99a0-33cce11b70c3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.413 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Triggering sync for uuid 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.414 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "f64b46b2-b462-4f18-99a0-33cce11b70c3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.415 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.416 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.418 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.454 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:33:16 compute-0 nova_compute[189440]: 2025-12-11 14:33:16.455 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "f64b46b2-b462-4f18-99a0-33cce11b70c3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:33:17 compute-0 podman[254540]: 2025-12-11 14:33:17.517668569 +0000 UTC m=+0.111337006 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 11 14:33:17 compute-0 podman[254549]: 2025-12-11 14:33:17.53036202 +0000 UTC m=+0.095352957 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 11 14:33:17 compute-0 podman[254541]: 2025-12-11 14:33:17.531409585 +0000 UTC m=+0.106023920 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', 
'/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible)
Dec 11 14:33:17 compute-0 podman[254542]: 2025-12-11 14:33:17.536296281 +0000 UTC m=+0.114674296 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 11 14:33:18 compute-0 nova_compute[189440]: 2025-12-11 14:33:18.272 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:18 compute-0 nova_compute[189440]: 2025-12-11 14:33:18.273 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:33:18 compute-0 nova_compute[189440]: 2025-12-11 14:33:18.274 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:33:18 compute-0 nova_compute[189440]: 2025-12-11 14:33:18.667 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:33:18 compute-0 nova_compute[189440]: 2025-12-11 14:33:18.668 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:33:18 compute-0 nova_compute[189440]: 2025-12-11 14:33:18.669 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:33:18 compute-0 nova_compute[189440]: 2025-12-11 14:33:18.670 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64b46b2-b462-4f18-99a0-33cce11b70c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:33:20 compute-0 nova_compute[189440]: 2025-12-11 14:33:20.720 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updating instance_info_cache with network_info: [{"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:33:20 compute-0 nova_compute[189440]: 2025-12-11 14:33:20.739 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:33:20 compute-0 nova_compute[189440]: 2025-12-11 14:33:20.740 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:33:20 compute-0 nova_compute[189440]: 2025-12-11 14:33:20.740 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:20 compute-0 nova_compute[189440]: 2025-12-11 14:33:20.741 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 11 14:33:20 compute-0 nova_compute[189440]: 2025-12-11 14:33:20.753 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 11 14:33:21 compute-0 nova_compute[189440]: 2025-12-11 14:33:21.271 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:21 compute-0 nova_compute[189440]: 2025-12-11 14:33:21.339 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.249 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.249 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.284 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.286 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.287 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.288 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.397 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.469 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.471 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.545 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.554 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.625 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.626 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:22 compute-0 nova_compute[189440]: 2025-12-11 14:33:22.725 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.167 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.168 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4941MB free_disk=72.26885986328125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.168 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.169 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.249 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.249 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.249 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.250 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.266 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing inventories for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.282 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating ProviderTree inventory for provider 1bda6308-729f-4919-a8ba-89570b8721fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.283 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Updating inventory in ProviderTree for provider 1bda6308-729f-4919-a8ba-89570b8721fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.299 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing aggregate associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.328 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Refreshing trait associations for resource provider 1bda6308-729f-4919-a8ba-89570b8721fc, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.401 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.420 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.423 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.423 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:33:23 compute-0 nova_compute[189440]: 2025-12-11 14:33:23.424 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:25 compute-0 podman[254626]: 2025-12-11 14:33:25.526356042 +0000 UTC m=+0.119015649 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 11 14:33:26 compute-0 nova_compute[189440]: 2025-12-11 14:33:26.273 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:26 compute-0 nova_compute[189440]: 2025-12-11 14:33:26.341 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:26 compute-0 nova_compute[189440]: 2025-12-11 14:33:26.425 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:26 compute-0 nova_compute[189440]: 2025-12-11 14:33:26.425 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:28 compute-0 podman[254651]: 2025-12-11 14:33:28.522185305 +0000 UTC m=+0.108653873 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:33:28 compute-0 podman[254650]: 2025-12-11 14:33:28.526076607 +0000 UTC m=+0.114965662 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6)
Dec 11 14:33:29 compute-0 podman[203650]: time="2025-12-11T14:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:33:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:33:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5268 "" "Go-http-client/1.1"
Dec 11 14:33:30 compute-0 nova_compute[189440]: 2025-12-11 14:33:30.230 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:33:31 compute-0 nova_compute[189440]: 2025-12-11 14:33:31.275 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:31 compute-0 nova_compute[189440]: 2025-12-11 14:33:31.343 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:31 compute-0 openstack_network_exporter[205834]: ERROR   14:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:33:31 compute-0 openstack_network_exporter[205834]: ERROR   14:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:33:31 compute-0 openstack_network_exporter[205834]: ERROR   14:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:33:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:33:31 compute-0 openstack_network_exporter[205834]: ERROR   14:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:33:31 compute-0 openstack_network_exporter[205834]: ERROR   14:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:33:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.196 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "62bfa43b-7258-445f-b9e2-f93556312882" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.197 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.217 189444 DEBUG nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.312 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.313 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.322 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.323 189444 INFO nova.compute.claims [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.527 189444 DEBUG nova.compute.provider_tree [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.544 189444 DEBUG nova.scheduler.client.report [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.574 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.575 189444 DEBUG nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.636 189444 DEBUG nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.637 189444 DEBUG nova.network.neutron [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.659 189444 INFO nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.684 189444 DEBUG nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.791 189444 DEBUG nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.793 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.795 189444 INFO nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Creating image(s)#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.796 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "/var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.797 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "/var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.799 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "/var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.830 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.922 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.923 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "b9398531008bd76fff67b1480b858b505311524e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.924 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:32 compute-0 nova_compute[189440]: 2025-12-11 14:33:32.935 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.034 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.036 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e,backing_fmt=raw /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.115 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e,backing_fmt=raw /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk 1073741824" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.117 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "b9398531008bd76fff67b1480b858b505311524e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.119 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.211 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9398531008bd76fff67b1480b858b505311524e --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.212 189444 DEBUG nova.virt.disk.api [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Checking if we can resize image /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.213 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.280 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.281 189444 DEBUG nova.virt.disk.api [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Cannot resize image /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.282 189444 DEBUG nova.objects.instance [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lazy-loading 'migration_context' on Instance uuid 62bfa43b-7258-445f-b9e2-f93556312882 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.301 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.302 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Ensure instance console log exists: /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.302 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.303 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.303 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:33:33 compute-0 nova_compute[189440]: 2025-12-11 14:33:33.364 189444 DEBUG nova.policy [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '66ecbf8a280a44f5b04c4f801fa62c4b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9918c3b83e4146fb8f595fd50ea637fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 11 14:33:35 compute-0 nova_compute[189440]: 2025-12-11 14:33:35.023 189444 DEBUG nova.network.neutron [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Successfully created port: 5867872c-9fad-4f6d-bbe9-964f15daf5ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 11 14:33:36 compute-0 nova_compute[189440]: 2025-12-11 14:33:36.278 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:36 compute-0 nova_compute[189440]: 2025-12-11 14:33:36.346 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:36 compute-0 nova_compute[189440]: 2025-12-11 14:33:36.669 189444 DEBUG nova.network.neutron [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Successfully updated port: 5867872c-9fad-4f6d-bbe9-964f15daf5ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 11 14:33:36 compute-0 nova_compute[189440]: 2025-12-11 14:33:36.685 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "refresh_cache-62bfa43b-7258-445f-b9e2-f93556312882" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:33:36 compute-0 nova_compute[189440]: 2025-12-11 14:33:36.686 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquired lock "refresh_cache-62bfa43b-7258-445f-b9e2-f93556312882" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:33:36 compute-0 nova_compute[189440]: 2025-12-11 14:33:36.686 189444 DEBUG nova.network.neutron [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 11 14:33:37 compute-0 nova_compute[189440]: 2025-12-11 14:33:37.215 189444 DEBUG nova.network.neutron [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 11 14:33:37 compute-0 nova_compute[189440]: 2025-12-11 14:33:37.816 189444 DEBUG nova.compute.manager [req-3fc410ea-1948-4c4e-a018-0279e18239db req-a4d69b76-9e70-4163-afc8-9b334a778f6e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received event network-changed-5867872c-9fad-4f6d-bbe9-964f15daf5ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:33:37 compute-0 nova_compute[189440]: 2025-12-11 14:33:37.817 189444 DEBUG nova.compute.manager [req-3fc410ea-1948-4c4e-a018-0279e18239db req-a4d69b76-9e70-4163-afc8-9b334a778f6e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Refreshing instance network info cache due to event network-changed-5867872c-9fad-4f6d-bbe9-964f15daf5ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:33:37 compute-0 nova_compute[189440]: 2025-12-11 14:33:37.817 189444 DEBUG oslo_concurrency.lockutils [req-3fc410ea-1948-4c4e-a018-0279e18239db req-a4d69b76-9e70-4163-afc8-9b334a778f6e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-62bfa43b-7258-445f-b9e2-f93556312882" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.613 189444 DEBUG nova.network.neutron [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Updating instance_info_cache with network_info: [{"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.638 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Releasing lock "refresh_cache-62bfa43b-7258-445f-b9e2-f93556312882" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.639 189444 DEBUG nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Instance network_info: |[{"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.640 189444 DEBUG oslo_concurrency.lockutils [req-3fc410ea-1948-4c4e-a018-0279e18239db req-a4d69b76-9e70-4163-afc8-9b334a778f6e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-62bfa43b-7258-445f-b9e2-f93556312882" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.640 189444 DEBUG nova.network.neutron [req-3fc410ea-1948-4c4e-a018-0279e18239db req-a4d69b76-9e70-4163-afc8-9b334a778f6e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Refreshing network info cache for port 5867872c-9fad-4f6d-bbe9-964f15daf5ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.644 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Start _get_guest_xml network_info=[{"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-11T14:25:25Z,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-11T14:25:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'device_type': 'disk', 'image_id': '64e29581-a774-4784-b0cb-b4428b3222f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.654 189444 WARNING nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.664 189444 DEBUG nova.virt.libvirt.host [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.665 189444 DEBUG nova.virt.libvirt.host [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.671 189444 DEBUG nova.virt.libvirt.host [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.672 189444 DEBUG nova.virt.libvirt.host [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
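The two probes above (v1 missing, v2 found) boil down to checking whether a `cpu` controller is available; on a cgroups-v2 host that means looking for the token in `/sys/fs/cgroup/cgroup.controllers`. A minimal sketch of that check (hypothetical helper, not Nova's actual code; the sample contents are an assumption):

```python
def has_cgroupsv2_cpu_controller(controllers_line: str) -> bool:
    """Return True if the 'cpu' controller appears in the space-separated
    contents of /sys/fs/cgroup/cgroup.controllers (cgroups v2).

    Token-wise matching matters: 'cpuset' must not count as 'cpu'.
    """
    return "cpu" in controllers_line.split()

# A cgroups-v2 host typically exposes something like:
sample = "cpuset cpu io memory hugetlb pids rdma misc"
print(has_cgroupsv2_cpu_controller(sample))  # True — "CPU controller found on host."
```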
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.673 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.673 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-11T14:25:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='639c6f85-2c0f-4003-98b6-94c63eeb9fc7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-11T14:25:25Z,direct_url=<?>,disk_format='qcow2',id=64e29581-a774-4784-b0cb-b4428b3222f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9c30b62d3d094e1e8b410a2af9fd7d98',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-11T14:25:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.674 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.674 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.675 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.675 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.676 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.676 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.677 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.677 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.678 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.678 189444 DEBUG nova.virt.hardware [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.682 189444 DEBUG nova.virt.libvirt.vif [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1014638578',display_name='tempest-TestServerBasicOps-server-1014638578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1014638578',id=9,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA7iyLoIHTWacJvoiouumlz6dlFkR5262yGsw865DcSUuDmeWwYsJQgYdwidpGvc0DIt6lJev8qlAifxnLSRhnk+65agwiuleoK2QPljsrWTbNmd08IEYLMA3e0FsQd0sA==',key_name='tempest-TestServerBasicOps-463530918',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9918c3b83e4146fb8f595fd50ea637fe',ramdisk_id='',reservation_id='r-p4xeylws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-179046709',owner_user_name='tempest-TestServerBasicOps-179046709-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:33:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66ecbf8a280a44f5b04c4f801fa62c4b',uuid=62bfa43b-7258-445f-b9e2-f93556312882,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.683 189444 DEBUG nova.network.os_vif_util [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Converting VIF {"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.684 189444 DEBUG nova.network.os_vif_util [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:a7:42,bridge_name='br-int',has_traffic_filtering=True,id=5867872c-9fad-4f6d-bbe9-964f15daf5ad,network=Network(81c64238-e165-40c5-bca0-74045d48e1c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5867872c-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.685 189444 DEBUG nova.objects.instance [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 62bfa43b-7258-445f-b9e2-f93556312882 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.706 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] End _get_guest_xml xml=<domain type="kvm">
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <uuid>62bfa43b-7258-445f-b9e2-f93556312882</uuid>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <name>instance-00000009</name>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <memory>131072</memory>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <vcpu>1</vcpu>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <metadata>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <nova:name>tempest-TestServerBasicOps-server-1014638578</nova:name>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <nova:creationTime>2025-12-11 14:33:38</nova:creationTime>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <nova:flavor name="m1.nano">
Dec 11 14:33:38 compute-0 nova_compute[189440]:        <nova:memory>128</nova:memory>
Dec 11 14:33:38 compute-0 nova_compute[189440]:        <nova:disk>1</nova:disk>
Dec 11 14:33:38 compute-0 nova_compute[189440]:        <nova:swap>0</nova:swap>
Dec 11 14:33:38 compute-0 nova_compute[189440]:        <nova:ephemeral>0</nova:ephemeral>
Dec 11 14:33:38 compute-0 nova_compute[189440]:        <nova:vcpus>1</nova:vcpus>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      </nova:flavor>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <nova:owner>
Dec 11 14:33:38 compute-0 nova_compute[189440]:        <nova:user uuid="66ecbf8a280a44f5b04c4f801fa62c4b">tempest-TestServerBasicOps-179046709-project-member</nova:user>
Dec 11 14:33:38 compute-0 nova_compute[189440]:        <nova:project uuid="9918c3b83e4146fb8f595fd50ea637fe">tempest-TestServerBasicOps-179046709</nova:project>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      </nova:owner>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <nova:root type="image" uuid="64e29581-a774-4784-b0cb-b4428b3222f4"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <nova:ports>
Dec 11 14:33:38 compute-0 nova_compute[189440]:        <nova:port uuid="5867872c-9fad-4f6d-bbe9-964f15daf5ad">
Dec 11 14:33:38 compute-0 nova_compute[189440]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:        </nova:port>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      </nova:ports>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    </nova:instance>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  </metadata>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <sysinfo type="smbios">
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <system>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <entry name="manufacturer">RDO</entry>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <entry name="product">OpenStack Compute</entry>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <entry name="serial">62bfa43b-7258-445f-b9e2-f93556312882</entry>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <entry name="uuid">62bfa43b-7258-445f-b9e2-f93556312882</entry>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <entry name="family">Virtual Machine</entry>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    </system>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  </sysinfo>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <os>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <boot dev="hd"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <smbios mode="sysinfo"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  </os>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <features>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <acpi/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <apic/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <vmcoreinfo/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  </features>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <clock offset="utc">
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <timer name="pit" tickpolicy="delay"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <timer name="hpet" present="no"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  </clock>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <cpu mode="host-model" match="exact">
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <topology sockets="1" cores="1" threads="1"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  </cpu>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  <devices>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <disk type="file" device="disk">
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <target dev="vda" bus="virtio"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <disk type="file" device="cdrom">
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <driver name="qemu" type="raw" cache="none"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <source file="/var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk.config"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <target dev="sda" bus="sata"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    </disk>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <interface type="ethernet">
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <mac address="fa:16:3e:8c:a7:42"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <driver name="vhost" rx_queue_size="512"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <mtu size="1442"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <target dev="tap5867872c-9f"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    </interface>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <serial type="pty">
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <log file="/var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/console.log" append="off"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    </serial>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <video>
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <model type="virtio"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    </video>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <input type="tablet" bus="usb"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <rng model="virtio">
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <backend model="random">/dev/urandom</backend>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    </rng>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="pci" model="pcie-root-port"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <controller type="usb" index="0"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    <memballoon model="virtio">
Dec 11 14:33:38 compute-0 nova_compute[189440]:      <stats period="10"/>
Dec 11 14:33:38 compute-0 nova_compute[189440]:    </memballoon>
Dec 11 14:33:38 compute-0 nova_compute[189440]:  </devices>
Dec 11 14:33:38 compute-0 nova_compute[189440]: </domain>
Dec 11 14:33:38 compute-0 nova_compute[189440]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
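The domain XML emitted above can be inspected programmatically with the standard library; note that the Nova metadata block lives under its own XML namespace (`http://openstack.org/xmlns/libvirt/nova/1.1`), so namespace-aware lookups are needed for it. A sketch over a trimmed copy of the document from the log:

```python
import xml.etree.ElementTree as ET

# Trimmed excerpt of the domain XML from the log above.
domain_xml = """\
<domain type="kvm">
  <uuid>62bfa43b-7258-445f-b9e2-f93556312882</uuid>
  <name>instance-00000009</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:flavor name="m1.nano"/>
    </nova:instance>
  </metadata>
</domain>
"""

root = ET.fromstring(domain_xml)
ns = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

print(root.findtext("name"))                        # instance-00000009
# libvirt <memory> defaults to KiB: 131072 KiB == 128 MiB, the m1.nano flavor.
print(int(root.findtext("memory")) // 1024, "MiB")  # 128 MiB
print(root.find(".//nova:flavor", ns).get("name"))  # m1.nano
```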
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.707 189444 DEBUG nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Preparing to wait for external event network-vif-plugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.708 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "62bfa43b-7258-445f-b9e2-f93556312882-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.708 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.708 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
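The "Preparing to wait for external event network-vif-plugged-..." step, guarded by the `-events` lock above, registers the event *before* the VIF is plugged, so a fast `network-vif-plugged` notification from Neutron cannot be missed. A sketch of that register-early, wait-later pattern using stdlib primitives (illustrative only, not Nova's implementation):

```python
import threading

events = {}
events_lock = threading.Lock()   # mirrors the "<uuid>-events" lock in the log

def prepare_for_event(name):
    """Create (or fetch) the event object before triggering the external action."""
    with events_lock:
        return events.setdefault(name, threading.Event())

def deliver_event(name):
    """Called when the external notification arrives (e.g. from Neutron)."""
    with events_lock:
        ev = events.get(name)
    if ev:
        ev.set()

# Register first, plug/notify second, then wait: no window to lose the event.
ev = prepare_for_event("network-vif-plugged-5867872c")
deliver_event("network-vif-plugged-5867872c")  # notification arrives early
print(ev.wait(timeout=1))  # True — the early notification was not lost
```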
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.709 189444 DEBUG nova.virt.libvirt.vif [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1014638578',display_name='tempest-TestServerBasicOps-server-1014638578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1014638578',id=9,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA7iyLoIHTWacJvoiouumlz6dlFkR5262yGsw865DcSUuDmeWwYsJQgYdwidpGvc0DIt6lJev8qlAifxnLSRhnk+65agwiuleoK2QPljsrWTbNmd08IEYLMA3e0FsQd0sA==',key_name='tempest-TestServerBasicOps-463530918',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9918c3b83e4146fb8f595fd50ea637fe',ramdisk_id='',reservation_id='r-p4xeylws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-179046709',owner_user_name='tempest-TestServerBasicOps-179046709-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-11T14:33:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66ecbf8a280a44f5b04c4f801fa62c4b',uuid=62bfa43b-7258-445f-b9e2-f93556312882,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.710 189444 DEBUG nova.network.os_vif_util [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Converting VIF {"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.711 189444 DEBUG nova.network.os_vif_util [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:a7:42,bridge_name='br-int',has_traffic_filtering=True,id=5867872c-9fad-4f6d-bbe9-964f15daf5ad,network=Network(81c64238-e165-40c5-bca0-74045d48e1c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5867872c-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.711 189444 DEBUG os_vif [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:a7:42,bridge_name='br-int',has_traffic_filtering=True,id=5867872c-9fad-4f6d-bbe9-964f15daf5ad,network=Network(81c64238-e165-40c5-bca0-74045d48e1c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5867872c-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.712 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.712 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.713 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.718 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.719 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5867872c-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.720 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5867872c-9f, col_values=(('external_ids', {'iface-id': '5867872c-9fad-4f6d-bbe9-964f15daf5ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:a7:42', 'vm-uuid': '62bfa43b-7258-445f-b9e2-f93556312882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.723 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:38 compute-0 NetworkManager[56353]: <info>  [1765463618.7252] manager: (tap5867872c-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.726 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.737 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.738 189444 INFO os_vif [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:a7:42,bridge_name='br-int',has_traffic_filtering=True,id=5867872c-9fad-4f6d-bbe9-964f15daf5ad,network=Network(81c64238-e165-40c5-bca0-74045d48e1c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5867872c-9f')#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.819 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.820 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.821 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] No VIF found with MAC fa:16:3e:8c:a7:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 11 14:33:38 compute-0 nova_compute[189440]: 2025-12-11 14:33:38.821 189444 INFO nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Using config drive#033[00m
Dec 11 14:33:39 compute-0 nova_compute[189440]: 2025-12-11 14:33:39.732 189444 INFO nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Creating config drive at /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk.config#033[00m
Dec 11 14:33:39 compute-0 nova_compute[189440]: 2025-12-11 14:33:39.745 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp815oo6v8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:33:39 compute-0 nova_compute[189440]: 2025-12-11 14:33:39.890 189444 DEBUG oslo_concurrency.processutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp815oo6v8" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:33:39 compute-0 kernel: tap5867872c-9f: entered promiscuous mode
Dec 11 14:33:39 compute-0 ovn_controller[97832]: 2025-12-11T14:33:39Z|00118|binding|INFO|Claiming lport 5867872c-9fad-4f6d-bbe9-964f15daf5ad for this chassis.
Dec 11 14:33:39 compute-0 nova_compute[189440]: 2025-12-11 14:33:39.966 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:39 compute-0 ovn_controller[97832]: 2025-12-11T14:33:39Z|00119|binding|INFO|5867872c-9fad-4f6d-bbe9-964f15daf5ad: Claiming fa:16:3e:8c:a7:42 10.100.0.5
Dec 11 14:33:39 compute-0 NetworkManager[56353]: <info>  [1765463619.9689] manager: (tap5867872c-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Dec 11 14:33:39 compute-0 nova_compute[189440]: 2025-12-11 14:33:39.972 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:39 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:39.981 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:a7:42 10.100.0.5'], port_security=['fa:16:3e:8c:a7:42 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '62bfa43b-7258-445f-b9e2-f93556312882', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81c64238-e165-40c5-bca0-74045d48e1c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9918c3b83e4146fb8f595fd50ea637fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c33d7ee-1d39-4b8c-84e0-7040a3de1e70 82b35f94-5b40-4cda-9cd3-cce23d5c35a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd46ae0b-db0e-4f35-bd94-b7354698fe8f, chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=5867872c-9fad-4f6d-bbe9-964f15daf5ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:33:39 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:39.983 106686 INFO neutron.agent.ovn.metadata.agent [-] Port 5867872c-9fad-4f6d-bbe9-964f15daf5ad in datapath 81c64238-e165-40c5-bca0-74045d48e1c2 bound to our chassis#033[00m
Dec 11 14:33:39 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:39.986 106686 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81c64238-e165-40c5-bca0-74045d48e1c2#033[00m
Dec 11 14:33:40 compute-0 systemd-udevd[254726]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.014 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[671e9c22-6610-4734-9806-d4484bb0da49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.015 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81c64238-e1 in ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.020 239832 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81c64238-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.020 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c14d30-a54f-41f0-be5e-5757819882d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.022 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.022 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[15be5767-ff7f-4e37-b4f4-31ffbe87c68f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_controller[97832]: 2025-12-11T14:33:40Z|00120|binding|INFO|Setting lport 5867872c-9fad-4f6d-bbe9-964f15daf5ad ovn-installed in OVS
Dec 11 14:33:40 compute-0 ovn_controller[97832]: 2025-12-11T14:33:40Z|00121|binding|INFO|Setting lport 5867872c-9fad-4f6d-bbe9-964f15daf5ad up in Southbound
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.029 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:40 compute-0 systemd-machined[155778]: New machine qemu-10-instance-00000009.
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.040 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3aec2d-9787-4e0d-8ac8-f84ed0c36246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000009.
Dec 11 14:33:40 compute-0 NetworkManager[56353]: <info>  [1765463620.0495] device (tap5867872c-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 14:33:40 compute-0 NetworkManager[56353]: <info>  [1765463620.0543] device (tap5867872c-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.075 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[7a54ff95-f148-4c35-a97d-0cb70f5a3f7a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.117 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[448a42b6-425e-4bbe-83e4-220a80f0338a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 systemd-udevd[254731]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.129 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0edb3d-f9ca-4b52-8ff4-9d44740bc300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 NetworkManager[56353]: <info>  [1765463620.1331] manager: (tap81c64238-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.171 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1b4f97-6f2e-4951-92cd-8730cfe31680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.176 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[f3df81df-1826-4470-9772-1ce2e08a502b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 NetworkManager[56353]: <info>  [1765463620.2091] device (tap81c64238-e0): carrier: link connected
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.214 239872 DEBUG oslo.privsep.daemon [-] privsep: reply[c3577e1f-e300-4254-8f5c-5a2b41d0691e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.240 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[df6e881f-1511-407c-99c4-02258566e304]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81c64238-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:3e:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568053, 'reachable_time': 25011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254759, 'error': None, 'target': 'ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.269 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[f32e1c1c-6beb-42ac-845e-8cbc8b98e357]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:3e4e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568053, 'tstamp': 568053}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254760, 'error': None, 'target': 'ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.295 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7ff643-33e5-4016-b627-ff6927f65cee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81c64238-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:3e:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568053, 'reachable_time': 25011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254761, 'error': None, 'target': 'ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.344 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0df72c-ab43-4242-bbaa-855153d94930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.428 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[b54915ad-645c-4f36-8525-e1267fd3598c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.430 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c64238-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.430 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.431 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81c64238-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:33:40 compute-0 kernel: tap81c64238-e0: entered promiscuous mode
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.436 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.437 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81c64238-e0, col_values=(('external_ids', {'iface-id': '4dac7bbd-014f-4aab-8bd4-d988f2ca41c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:33:40 compute-0 NetworkManager[56353]: <info>  [1765463620.4383] manager: (tap81c64238-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec 11 14:33:40 compute-0 ovn_controller[97832]: 2025-12-11T14:33:40Z|00122|binding|INFO|Releasing lport 4dac7bbd-014f-4aab-8bd4-d988f2ca41c9 from this chassis (sb_readonly=0)
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.441 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.442 106686 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81c64238-e165-40c5-bca0-74045d48e1c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81c64238-e165-40c5-bca0-74045d48e1c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.444 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[a81ef7e5-8316-4fac-b0a2-d09793fdc448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.445 106686 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: global
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    log         /dev/log local0 debug
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    log-tag     haproxy-metadata-proxy-81c64238-e165-40c5-bca0-74045d48e1c2
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    user        root
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    group       root
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    maxconn     1024
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    pidfile     /var/lib/neutron/external/pids/81c64238-e165-40c5-bca0-74045d48e1c2.pid.haproxy
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    daemon
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: defaults
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    log global
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    mode http
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    option httplog
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    option dontlognull
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    option http-server-close
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    option forwardfor
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    retries                 3
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    timeout http-request    30s
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    timeout connect         30s
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    timeout client          32s
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    timeout server          32s
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    timeout http-keep-alive 30s
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: listen listener
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    bind 169.254.169.254:80
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    server metadata /var/lib/neutron/metadata_proxy
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]:    http-request add-header X-OVN-Network-ID 81c64238-e165-40c5-bca0-74045d48e1c2
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 11 14:33:40 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:33:40.446 106686 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2', 'env', 'PROCESS_TAG=haproxy-81c64238-e165-40c5-bca0-74045d48e1c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81c64238-e165-40c5-bca0-74045d48e1c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.455 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.484 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463620.4835706, 62bfa43b-7258-445f-b9e2-f93556312882 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.484 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] VM Started (Lifecycle Event)
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.513 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.521 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463620.4836903, 62bfa43b-7258-445f-b9e2-f93556312882 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.521 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] VM Paused (Lifecycle Event)
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.539 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.545 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.572 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 11 14:33:40 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 11 14:33:40 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.688 189444 DEBUG nova.compute.manager [req-2bce134f-f6fb-41f2-8907-4407d544890e req-b5c4fa86-b524-43bc-8a97-52f33d563a1b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received event network-vif-plugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.690 189444 DEBUG oslo_concurrency.lockutils [req-2bce134f-f6fb-41f2-8907-4407d544890e req-b5c4fa86-b524-43bc-8a97-52f33d563a1b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "62bfa43b-7258-445f-b9e2-f93556312882-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.691 189444 DEBUG oslo_concurrency.lockutils [req-2bce134f-f6fb-41f2-8907-4407d544890e req-b5c4fa86-b524-43bc-8a97-52f33d563a1b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.691 189444 DEBUG oslo_concurrency.lockutils [req-2bce134f-f6fb-41f2-8907-4407d544890e req-b5c4fa86-b524-43bc-8a97-52f33d563a1b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.692 189444 DEBUG nova.compute.manager [req-2bce134f-f6fb-41f2-8907-4407d544890e req-b5c4fa86-b524-43bc-8a97-52f33d563a1b a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Processing event network-vif-plugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.693 189444 DEBUG nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.697 189444 DEBUG nova.virt.driver [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] Emitting event <LifecycleEvent: 1765463620.6970706, 62bfa43b-7258-445f-b9e2-f93556312882 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.698 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] VM Resumed (Lifecycle Event)
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.700 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.706 189444 INFO nova.virt.libvirt.driver [-] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Instance spawned successfully.
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.708 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.726 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.736 189444 DEBUG nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.742 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.743 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.743 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.744 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.745 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.745 189444 DEBUG nova.virt.libvirt.driver [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.783 189444 INFO nova.compute.manager [None req-9dbc36e9-5c97-4204-9b9f-c6367bd4f09b - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.823 189444 INFO nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Took 8.03 seconds to spawn the instance on the hypervisor.
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.824 189444 DEBUG nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.887 189444 INFO nova.compute.manager [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Took 8.61 seconds to build instance.
Dec 11 14:33:40 compute-0 nova_compute[189440]: 2025-12-11 14:33:40.905 189444 DEBUG oslo_concurrency.lockutils [None req-3f1c6390-99f6-4e0e-bc56-9b77e91c9528 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:33:40 compute-0 podman[254818]: 2025-12-11 14:33:40.905996929 +0000 UTC m=+0.060409776 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 11 14:33:41 compute-0 nova_compute[189440]: 2025-12-11 14:33:41.015 189444 DEBUG nova.network.neutron [req-3fc410ea-1948-4c4e-a018-0279e18239db req-a4d69b76-9e70-4163-afc8-9b334a778f6e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Updated VIF entry in instance network info cache for port 5867872c-9fad-4f6d-bbe9-964f15daf5ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 11 14:33:41 compute-0 nova_compute[189440]: 2025-12-11 14:33:41.016 189444 DEBUG nova.network.neutron [req-3fc410ea-1948-4c4e-a018-0279e18239db req-a4d69b76-9e70-4163-afc8-9b334a778f6e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Updating instance_info_cache with network_info: [{"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 11 14:33:41 compute-0 nova_compute[189440]: 2025-12-11 14:33:41.034 189444 DEBUG oslo_concurrency.lockutils [req-3fc410ea-1948-4c4e-a018-0279e18239db req-a4d69b76-9e70-4163-afc8-9b334a778f6e a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-62bfa43b-7258-445f-b9e2-f93556312882" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 11 14:33:41 compute-0 podman[254818]: 2025-12-11 14:33:41.072906005 +0000 UTC m=+0.227318742 container create 1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 11 14:33:41 compute-0 systemd[1]: Started libpod-conmon-1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963.scope.
Dec 11 14:33:41 compute-0 systemd[1]: Started libcrun container.
Dec 11 14:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dba00a14914b6438b869e79095cb5f97dadeb4ade8384ec1a89da6bd609d7d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 11 14:33:41 compute-0 podman[254818]: 2025-12-11 14:33:41.219439197 +0000 UTC m=+0.373851954 container init 1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 11 14:33:41 compute-0 podman[254818]: 2025-12-11 14:33:41.233987883 +0000 UTC m=+0.388400620 container start 1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 11 14:33:41 compute-0 nova_compute[189440]: 2025-12-11 14:33:41.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:33:41 compute-0 nova_compute[189440]: 2025-12-11 14:33:41.237 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 11 14:33:41 compute-0 neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2[254833]: [NOTICE]   (254837) : New worker (254839) forked
Dec 11 14:33:41 compute-0 neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2[254833]: [NOTICE]   (254837) : Loading success.
Dec 11 14:33:41 compute-0 nova_compute[189440]: 2025-12-11 14:33:41.279 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:33:42 compute-0 podman[254848]: 2025-12-11 14:33:42.504415399 +0000 UTC m=+0.093012201 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 11 14:33:42 compute-0 podman[254849]: 2025-12-11 14:33:42.537218737 +0000 UTC m=+0.113232380 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:33:42 compute-0 nova_compute[189440]: 2025-12-11 14:33:42.839 189444 DEBUG nova.compute.manager [req-9c9e172f-88d7-498d-a4ad-671bb4246d15 req-8c051df1-dae1-4861-88d4-f2c85454f008 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received event network-vif-plugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 11 14:33:42 compute-0 nova_compute[189440]: 2025-12-11 14:33:42.841 189444 DEBUG oslo_concurrency.lockutils [req-9c9e172f-88d7-498d-a4ad-671bb4246d15 req-8c051df1-dae1-4861-88d4-f2c85454f008 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "62bfa43b-7258-445f-b9e2-f93556312882-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:33:42 compute-0 nova_compute[189440]: 2025-12-11 14:33:42.842 189444 DEBUG oslo_concurrency.lockutils [req-9c9e172f-88d7-498d-a4ad-671bb4246d15 req-8c051df1-dae1-4861-88d4-f2c85454f008 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:33:42 compute-0 nova_compute[189440]: 2025-12-11 14:33:42.843 189444 DEBUG oslo_concurrency.lockutils [req-9c9e172f-88d7-498d-a4ad-671bb4246d15 req-8c051df1-dae1-4861-88d4-f2c85454f008 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:33:42 compute-0 nova_compute[189440]: 2025-12-11 14:33:42.843 189444 DEBUG nova.compute.manager [req-9c9e172f-88d7-498d-a4ad-671bb4246d15 req-8c051df1-dae1-4861-88d4-f2c85454f008 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] No waiting events found dispatching network-vif-plugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 11 14:33:42 compute-0 nova_compute[189440]: 2025-12-11 14:33:42.844 189444 WARNING nova.compute.manager [req-9c9e172f-88d7-498d-a4ad-671bb4246d15 req-8c051df1-dae1-4861-88d4-f2c85454f008 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received unexpected event network-vif-plugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad for instance with vm_state active and task_state None.
Dec 11 14:33:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.993 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:33:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.995 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:33:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:42.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.003 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b112e8a-c27d-4b2e-91fc-81552a0cd4ee', 'name': 'tempest-AttachInterfacesUnderV243Test-server-29252937', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b0f7c7a5f01c4c7a9fd2fa3668dcd463', 'user_id': 'a714564f83e74b39aa33b964e9913421', 'hostId': '5dbf343690864d1983c881e8bc082672162e288a5198d8460c1b4743', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.009 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64b46b2-b462-4f18-99a0-33cce11b70c3', 'name': 'tempest-ServerAddressesTestJSON-server-1930571022', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16cfe265641045f6adca23a64917736e', 'user_id': '719b5c4df50d474091f6f471803c8a13', 'hostId': '2fcddfdd3b298ab69316782a145f6113cf5f677ad9bc894793473b66', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dc81880>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.012 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 62bfa43b-7258-445f-b9e2-f93556312882 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.013 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/62bfa43b-7258-445f-b9e2-f93556312882 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}cccfdb98f7814d2104ef30522629f30f2e7025f3d377e4b2e1b0c401a523009e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.635 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1961 Content-Type: application/json Date: Thu, 11 Dec 2025 14:33:43 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ad193946-582b-41ba-a4c6-b1d7d61b4e74 x-openstack-request-id: req-ad193946-582b-41ba-a4c6-b1d7d61b4e74 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.635 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "62bfa43b-7258-445f-b9e2-f93556312882", "name": "tempest-TestServerBasicOps-server-1014638578", "status": "ACTIVE", "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "user_id": "66ecbf8a280a44f5b04c4f801fa62c4b", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "228a14ae8512bd0620d570734e19d478e0921e8f64465b99659b997c", "image": {"id": "64e29581-a774-4784-b0cb-b4428b3222f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/64e29581-a774-4784-b0cb-b4428b3222f4"}]}, "flavor": {"id": "639c6f85-2c0f-4003-98b6-94c63eeb9fc7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/639c6f85-2c0f-4003-98b6-94c63eeb9fc7"}]}, "created": "2025-12-11T14:33:30Z", "updated": "2025-12-11T14:33:40Z", "addresses": {"tempest-TestServerBasicOps-870097525-network": [{"version": 4, "addr": "10.100.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8c:a7:42"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/62bfa43b-7258-445f-b9e2-f93556312882"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/62bfa43b-7258-445f-b9e2-f93556312882"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-463530918", "OS-SRV-USG:launched_at": "2025-12-11T14:33:40.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1924285554"}, {"name": "tempest-securitygroup--2123570456"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.635 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/62bfa43b-7258-445f-b9e2-f93556312882 used request id req-ad193946-582b-41ba-a4c6-b1d7d61b4e74 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.636 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62bfa43b-7258-445f-b9e2-f93556312882', 'name': 'tempest-TestServerBasicOps-server-1014638578', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9918c3b83e4146fb8f595fd50ea637fe', 'user_id': '66ecbf8a280a44f5b04c4f801fa62c4b', 'hostId': '228a14ae8512bd0620d570734e19d478e0921e8f64465b99659b997c', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.637 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.638 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:33:43.637992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.644 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.650 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.655 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 62bfa43b-7258-445f-b9e2-f93556312882 / tap5867872c-9f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.655 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.656 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.657 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:33:43.657355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.686 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/cpu volume: 38160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.718 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/cpu volume: 39270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 nova_compute[189440]: 2025-12-11 14:33:43.724 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.746 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/cpu volume: 2900000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.747 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.748 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.748 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.749 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.749 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.750 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:33:43.749602) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.765 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.765 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.784 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.785 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.798 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.799 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.800 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.800 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.800 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.801 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:33:43.801332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.802 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.802 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.802 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.803 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.803 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.804 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.804 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:33:43.804362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.804 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/memory.usage volume: 46.4921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.805 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/memory.usage volume: 41.73828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.805 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.805 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 62bfa43b-7258-445f-b9e2-f93556312882: ceilometer.compute.pollsters.NoVolumeException
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.806 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.806 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.807 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:33:43.807254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.807 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.808 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.808 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.809 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.809 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.810 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.810 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.810 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-11T14:33:43.810224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.810 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.811 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1014638578>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1014638578>]
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.811 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.812 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:33:43.812283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.812 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.813 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.813 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.814 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.814 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.815 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:33:43.815203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.815 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.816 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.816 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.817 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.818 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:33:43.818103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.818 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.818 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.819 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.819 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.819 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.820 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:33:43.820809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.821 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.821 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.822 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.822 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.822 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.823 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.823 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.824 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.824 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:33:43.824737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.880 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.latency volume: 509451213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.881 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.latency volume: 51551775 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.938 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 715818456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.939 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 141083317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.983 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.read.latency volume: 521261430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.984 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.read.latency volume: 1217829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.986 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.986 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:33:43.986766) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.988 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.requests volume: 1104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.988 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.989 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 1133 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.990 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.991 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.992 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.994 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.996 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:33:43.996069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.998 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:43.999 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.000 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.001 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.003 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.003 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.003 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.004 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.004 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.004 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.bytes volume: 73060352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.005 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.005 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 73019392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.005 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.006 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.006 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:33:44.004479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.009 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:33:44.009538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.010 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.latency volume: 4383891649 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.010 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.011 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 10586132488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.011 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.012 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.012 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.014 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.015 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:33:44.015201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.016 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.016 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.017 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.018 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.019 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.020 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.requests volume: 332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:33:44.019542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.020 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.021 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.021 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.022 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.022 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.023 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.024 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.024 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:33:44.024212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.024 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.025 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.026 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.026 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.027 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-11T14:33:44.026748) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.027 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1014638578>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1014638578>]
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.028 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.028 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:33:44.028928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.030 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.031 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.031 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:33:44.031325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.032 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.032 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.033 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.033 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.033 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.035 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:33:44.035453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.036 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.037 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.037 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:33:44.037488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.038 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.038 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.039 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.040 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.040 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:33:44.040416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.041 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.041 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.042 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.043 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.bytes volume: 30521856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:33:44.043282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.044 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.044 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 31009280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.045 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.045 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.045 14 DEBUG ceilometer.compute.pollsters [-] 62bfa43b-7258-445f-b9e2-f93556312882/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:33:44.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:33:44 compute-0 NetworkManager[56353]: <info>  [1765463624.6107] manager: (patch-br-int-to-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Dec 11 14:33:44 compute-0 NetworkManager[56353]: <info>  [1765463624.6118] manager: (patch-provnet-6faac981-17dd-4b78-8b8f-046b8a4b3a94-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Dec 11 14:33:44 compute-0 nova_compute[189440]: 2025-12-11 14:33:44.610 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:44 compute-0 nova_compute[189440]: 2025-12-11 14:33:44.829 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:44 compute-0 ovn_controller[97832]: 2025-12-11T14:33:44Z|00123|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:33:44 compute-0 ovn_controller[97832]: 2025-12-11T14:33:44Z|00124|binding|INFO|Releasing lport 4dac7bbd-014f-4aab-8bd4-d988f2ca41c9 from this chassis (sb_readonly=0)
Dec 11 14:33:44 compute-0 ovn_controller[97832]: 2025-12-11T14:33:44Z|00125|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:33:44 compute-0 nova_compute[189440]: 2025-12-11 14:33:44.874 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:45 compute-0 nova_compute[189440]: 2025-12-11 14:33:45.130 189444 DEBUG nova.compute.manager [req-c83af57e-f703-46dd-bc2e-f876544fe462 req-698cc70f-7281-4b76-92b3-83be56ec20da a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received event network-changed-5867872c-9fad-4f6d-bbe9-964f15daf5ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 11 14:33:45 compute-0 nova_compute[189440]: 2025-12-11 14:33:45.132 189444 DEBUG nova.compute.manager [req-c83af57e-f703-46dd-bc2e-f876544fe462 req-698cc70f-7281-4b76-92b3-83be56ec20da a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Refreshing instance network info cache due to event network-changed-5867872c-9fad-4f6d-bbe9-964f15daf5ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 11 14:33:45 compute-0 nova_compute[189440]: 2025-12-11 14:33:45.134 189444 DEBUG oslo_concurrency.lockutils [req-c83af57e-f703-46dd-bc2e-f876544fe462 req-698cc70f-7281-4b76-92b3-83be56ec20da a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "refresh_cache-62bfa43b-7258-445f-b9e2-f93556312882" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:33:45 compute-0 nova_compute[189440]: 2025-12-11 14:33:45.135 189444 DEBUG oslo_concurrency.lockutils [req-c83af57e-f703-46dd-bc2e-f876544fe462 req-698cc70f-7281-4b76-92b3-83be56ec20da a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquired lock "refresh_cache-62bfa43b-7258-445f-b9e2-f93556312882" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:33:45 compute-0 nova_compute[189440]: 2025-12-11 14:33:45.136 189444 DEBUG nova.network.neutron [req-c83af57e-f703-46dd-bc2e-f876544fe462 req-698cc70f-7281-4b76-92b3-83be56ec20da a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Refreshing network info cache for port 5867872c-9fad-4f6d-bbe9-964f15daf5ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 11 14:33:46 compute-0 nova_compute[189440]: 2025-12-11 14:33:46.282 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:46 compute-0 nova_compute[189440]: 2025-12-11 14:33:46.820 189444 DEBUG nova.network.neutron [req-c83af57e-f703-46dd-bc2e-f876544fe462 req-698cc70f-7281-4b76-92b3-83be56ec20da a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Updated VIF entry in instance network info cache for port 5867872c-9fad-4f6d-bbe9-964f15daf5ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 11 14:33:46 compute-0 nova_compute[189440]: 2025-12-11 14:33:46.821 189444 DEBUG nova.network.neutron [req-c83af57e-f703-46dd-bc2e-f876544fe462 req-698cc70f-7281-4b76-92b3-83be56ec20da a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Updating instance_info_cache with network_info: [{"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:33:46 compute-0 nova_compute[189440]: 2025-12-11 14:33:46.838 189444 DEBUG oslo_concurrency.lockutils [req-c83af57e-f703-46dd-bc2e-f876544fe462 req-698cc70f-7281-4b76-92b3-83be56ec20da a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Releasing lock "refresh_cache-62bfa43b-7258-445f-b9e2-f93556312882" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:33:48 compute-0 podman[254902]: 2025-12-11 14:33:48.502230721 +0000 UTC m=+0.078549078 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 11 14:33:48 compute-0 podman[254895]: 2025-12-11 14:33:48.502405975 +0000 UTC m=+0.083527216 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 11 14:33:48 compute-0 podman[254893]: 2025-12-11 14:33:48.517590476 +0000 UTC m=+0.113650182 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 11 14:33:48 compute-0 podman[254894]: 2025-12-11 14:33:48.522298047 +0000 UTC m=+0.110442635 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, release-0.7.12=, version=9.4, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30)
Dec 11 14:33:48 compute-0 nova_compute[189440]: 2025-12-11 14:33:48.729 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:51 compute-0 nova_compute[189440]: 2025-12-11 14:33:51.285 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:53 compute-0 nova_compute[189440]: 2025-12-11 14:33:53.733 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:56 compute-0 nova_compute[189440]: 2025-12-11 14:33:56.288 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:56 compute-0 podman[254966]: 2025-12-11 14:33:56.578940748 +0000 UTC m=+0.163009644 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 11 14:33:58 compute-0 nova_compute[189440]: 2025-12-11 14:33:58.738 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:33:59 compute-0 podman[254991]: 2025-12-11 14:33:59.5162533 +0000 UTC m=+0.102279981 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible)
Dec 11 14:33:59 compute-0 podman[254992]: 2025-12-11 14:33:59.537801742 +0000 UTC m=+0.125779429 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 11 14:33:59 compute-0 podman[203650]: time="2025-12-11T14:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:33:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec 11 14:33:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5732 "" "Go-http-client/1.1"
Dec 11 14:34:01 compute-0 nova_compute[189440]: 2025-12-11 14:34:01.291 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:01 compute-0 openstack_network_exporter[205834]: ERROR   14:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:34:01 compute-0 openstack_network_exporter[205834]: ERROR   14:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:34:01 compute-0 openstack_network_exporter[205834]: ERROR   14:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:34:01 compute-0 openstack_network_exporter[205834]: ERROR   14:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:34:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:34:01 compute-0 openstack_network_exporter[205834]: ERROR   14:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:34:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:34:03 compute-0 nova_compute[189440]: 2025-12-11 14:34:03.742 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:04.116 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:34:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:04.118 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:34:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:04.119 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:34:06 compute-0 nova_compute[189440]: 2025-12-11 14:34:06.294 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:08 compute-0 nova_compute[189440]: 2025-12-11 14:34:08.748 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:11 compute-0 nova_compute[189440]: 2025-12-11 14:34:11.257 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:34:11 compute-0 nova_compute[189440]: 2025-12-11 14:34:11.297 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:13 compute-0 ovn_controller[97832]: 2025-12-11T14:34:13Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:a7:42 10.100.0.5
Dec 11 14:34:13 compute-0 ovn_controller[97832]: 2025-12-11T14:34:13Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:a7:42 10.100.0.5
Dec 11 14:34:13 compute-0 podman[255043]: 2025-12-11 14:34:13.49411804 +0000 UTC m=+0.088010802 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:34:13 compute-0 podman[255042]: 2025-12-11 14:34:13.522012013 +0000 UTC m=+0.120748290 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3)
Dec 11 14:34:13 compute-0 nova_compute[189440]: 2025-12-11 14:34:13.753 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:14 compute-0 nova_compute[189440]: 2025-12-11 14:34:14.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:34:14 compute-0 nova_compute[189440]: 2025-12-11 14:34:14.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:34:14 compute-0 ovn_controller[97832]: 2025-12-11T14:34:14Z|00126|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 11 14:34:16 compute-0 nova_compute[189440]: 2025-12-11 14:34:16.300 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:17 compute-0 nova_compute[189440]: 2025-12-11 14:34:17.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:34:17 compute-0 nova_compute[189440]: 2025-12-11 14:34:17.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:34:18 compute-0 nova_compute[189440]: 2025-12-11 14:34:18.757 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:19 compute-0 podman[255084]: 2025-12-11 14:34:19.510729904 +0000 UTC m=+0.088702398 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 14:34:19 compute-0 podman[255085]: 2025-12-11 14:34:19.539030938 +0000 UTC m=+0.113642692 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1214.1726694543)
Dec 11 14:34:19 compute-0 podman[255086]: 2025-12-11 14:34:19.555200321 +0000 UTC m=+0.124925959 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 11 14:34:19 compute-0 podman[255087]: 2025-12-11 14:34:19.571582701 +0000 UTC m=+0.133490093 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true)
Dec 11 14:34:20 compute-0 nova_compute[189440]: 2025-12-11 14:34:20.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:34:20 compute-0 nova_compute[189440]: 2025-12-11 14:34:20.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:34:21 compute-0 nova_compute[189440]: 2025-12-11 14:34:21.305 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:22 compute-0 nova_compute[189440]: 2025-12-11 14:34:22.493 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:34:22 compute-0 nova_compute[189440]: 2025-12-11 14:34:22.494 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:34:22 compute-0 nova_compute[189440]: 2025-12-11 14:34:22.494 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:34:23 compute-0 nova_compute[189440]: 2025-12-11 14:34:23.761 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:26 compute-0 nova_compute[189440]: 2025-12-11 14:34:26.308 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.255 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updating instance_info_cache with network_info: [{"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.274 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.275 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.276 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.277 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.277 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.318 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.319 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.319 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.320 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.422 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.489 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.491 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:34:27 compute-0 podman[255155]: 2025-12-11 14:34:27.527062204 +0000 UTC m=+0.128183708 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.558 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.565 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.631 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.633 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.704 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.716 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.784 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.785 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:34:27 compute-0 nova_compute[189440]: 2025-12-11 14:34:27.851 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.338 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.340 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4765MB free_disk=72.24021530151367GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.340 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.341 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.564 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.565 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.565 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 62bfa43b-7258-445f-b9e2-f93556312882 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.565 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.566 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.688 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.709 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.727 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.728 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:34:28 compute-0 nova_compute[189440]: 2025-12-11 14:34:28.764 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:29 compute-0 nova_compute[189440]: 2025-12-11 14:34:29.723 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:34:29 compute-0 podman[203650]: time="2025-12-11T14:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:34:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec 11 14:34:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5738 "" "Go-http-client/1.1"
Dec 11 14:34:30 compute-0 podman[255198]: 2025-12-11 14:34:30.502117814 +0000 UTC m=+0.075284430 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 11 14:34:30 compute-0 podman[255197]: 2025-12-11 14:34:30.563301537 +0000 UTC m=+0.135037049 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, version=9.6)
Dec 11 14:34:31 compute-0 nova_compute[189440]: 2025-12-11 14:34:31.310 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:31 compute-0 openstack_network_exporter[205834]: ERROR   14:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:34:31 compute-0 openstack_network_exporter[205834]: ERROR   14:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:34:31 compute-0 openstack_network_exporter[205834]: ERROR   14:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:34:31 compute-0 openstack_network_exporter[205834]: ERROR   14:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:34:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:34:31 compute-0 openstack_network_exporter[205834]: ERROR   14:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:34:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:34:33 compute-0 nova_compute[189440]: 2025-12-11 14:34:33.767 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:36 compute-0 nova_compute[189440]: 2025-12-11 14:34:36.314 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:38 compute-0 nova_compute[189440]: 2025-12-11 14:34:38.770 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:41 compute-0 nova_compute[189440]: 2025-12-11 14:34:41.317 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:43 compute-0 nova_compute[189440]: 2025-12-11 14:34:43.774 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:44 compute-0 podman[255242]: 2025-12-11 14:34:44.531612485 +0000 UTC m=+0.121859956 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:34:44 compute-0 podman[255241]: 2025-12-11 14:34:44.533219933 +0000 UTC m=+0.134149648 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 11 14:34:46 compute-0 nova_compute[189440]: 2025-12-11 14:34:46.319 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:48 compute-0 nova_compute[189440]: 2025-12-11 14:34:48.778 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:49 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:49.739 106794 DEBUG eventlet.wsgi.server [-] (106794) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 11 14:34:49 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:49.742 106794 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Dec 11 14:34:49 compute-0 ovn_metadata_agent[106681]: Accept: */*#015
Dec 11 14:34:49 compute-0 ovn_metadata_agent[106681]: Connection: close#015
Dec 11 14:34:49 compute-0 ovn_metadata_agent[106681]: Content-Type: text/plain#015
Dec 11 14:34:49 compute-0 ovn_metadata_agent[106681]: Host: 169.254.169.254#015
Dec 11 14:34:49 compute-0 ovn_metadata_agent[106681]: User-Agent: curl/7.84.0#015
Dec 11 14:34:49 compute-0 ovn_metadata_agent[106681]: X-Forwarded-For: 10.100.0.5#015
Dec 11 14:34:49 compute-0 ovn_metadata_agent[106681]: X-Ovn-Network-Id: 81c64238-e165-40c5-bca0-74045d48e1c2 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 11 14:34:50 compute-0 podman[255284]: 2025-12-11 14:34:50.54220567 +0000 UTC m=+0.113034737 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, config_id=edpm, maintainer=Red Hat, Inc., container_name=kepler, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 11 14:34:50 compute-0 podman[255283]: 2025-12-11 14:34:50.549268288 +0000 UTC m=+0.127940862 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 11 14:34:50 compute-0 podman[255285]: 2025-12-11 14:34:50.55399182 +0000 UTC m=+0.116389097 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 11 14:34:50 compute-0 podman[255286]: 2025-12-11 14:34:50.572610312 +0000 UTC m=+0.132000427 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 11 14:34:51 compute-0 nova_compute[189440]: 2025-12-11 14:34:51.323 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:52.532 106794 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:52.533 106794 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.7918782#033[00m
Dec 11 14:34:52 compute-0 haproxy-metadata-proxy-81c64238-e165-40c5-bca0-74045d48e1c2[254839]: 10.100.0.5:36128 [11/Dec/2025:14:34:49.738] listener listener/metadata 0/0/0/2795/2795 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:52.664 106794 DEBUG eventlet.wsgi.server [-] (106794) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:52.665 106794 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: Accept: */*#015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: Connection: close#015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: Content-Length: 100#015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: Content-Type: application/x-www-form-urlencoded#015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: Host: 169.254.169.254#015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: User-Agent: curl/7.84.0#015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: X-Forwarded-For: 10.100.0.5#015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: X-Ovn-Network-Id: 81c64238-e165-40c5-bca0-74045d48e1c2#015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: #015
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:52.937 106794 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 11 14:34:52 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:52.938 106794 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2730691#033[00m
Dec 11 14:34:52 compute-0 haproxy-metadata-proxy-81c64238-e165-40c5-bca0-74045d48e1c2[254839]: 10.100.0.5:42202 [11/Dec/2025:14:34:52.663] listener listener/metadata 0/0/0/275/275 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec 11 14:34:53 compute-0 nova_compute[189440]: 2025-12-11 14:34:53.782 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.484 189444 DEBUG oslo_concurrency.lockutils [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "62bfa43b-7258-445f-b9e2-f93556312882" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.485 189444 DEBUG oslo_concurrency.lockutils [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.485 189444 DEBUG oslo_concurrency.lockutils [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "62bfa43b-7258-445f-b9e2-f93556312882-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.486 189444 DEBUG oslo_concurrency.lockutils [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.486 189444 DEBUG oslo_concurrency.lockutils [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.489 189444 INFO nova.compute.manager [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Terminating instance#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.490 189444 DEBUG nova.compute.manager [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 11 14:34:55 compute-0 kernel: tap5867872c-9f (unregistering): left promiscuous mode
Dec 11 14:34:55 compute-0 NetworkManager[56353]: <info>  [1765463695.5260] device (tap5867872c-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 11 14:34:55 compute-0 ovn_controller[97832]: 2025-12-11T14:34:55Z|00127|binding|INFO|Releasing lport 5867872c-9fad-4f6d-bbe9-964f15daf5ad from this chassis (sb_readonly=0)
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.532 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:55 compute-0 ovn_controller[97832]: 2025-12-11T14:34:55Z|00128|binding|INFO|Setting lport 5867872c-9fad-4f6d-bbe9-964f15daf5ad down in Southbound
Dec 11 14:34:55 compute-0 ovn_controller[97832]: 2025-12-11T14:34:55Z|00129|binding|INFO|Removing iface tap5867872c-9f ovn-installed in OVS
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.543 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:55.549 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:a7:42 10.100.0.5'], port_security=['fa:16:3e:8c:a7:42 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '62bfa43b-7258-445f-b9e2-f93556312882', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81c64238-e165-40c5-bca0-74045d48e1c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9918c3b83e4146fb8f595fd50ea637fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c33d7ee-1d39-4b8c-84e0-7040a3de1e70 82b35f94-5b40-4cda-9cd3-cce23d5c35a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd46ae0b-db0e-4f35-bd94-b7354698fe8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>], logical_port=5867872c-9fad-4f6d-bbe9-964f15daf5ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5fb511f640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 11 14:34:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:55.551 106686 INFO neutron.agent.ovn.metadata.agent [-] Port 5867872c-9fad-4f6d-bbe9-964f15daf5ad in datapath 81c64238-e165-40c5-bca0-74045d48e1c2 unbound from our chassis#033[00m
Dec 11 14:34:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:55.555 106686 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81c64238-e165-40c5-bca0-74045d48e1c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 11 14:34:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:55.557 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[68552813-b9f7-4986-b13d-cf4eb150efc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:34:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:55.558 106686 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2 namespace which is not needed anymore#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.561 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:55 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec 11 14:34:55 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Consumed 40.725s CPU time.
Dec 11 14:34:55 compute-0 systemd-machined[155778]: Machine qemu-10-instance-00000009 terminated.
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.726 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.735 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.762 189444 INFO nova.virt.libvirt.driver [-] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Instance destroyed successfully.#033[00m
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.763 189444 DEBUG nova.objects.instance [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lazy-loading 'resources' on Instance uuid 62bfa43b-7258-445f-b9e2-f93556312882 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:34:55 compute-0 neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2[254833]: [NOTICE]   (254837) : haproxy version is 2.8.14-c23fe91
Dec 11 14:34:55 compute-0 neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2[254833]: [NOTICE]   (254837) : path to executable is /usr/sbin/haproxy
Dec 11 14:34:55 compute-0 neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2[254833]: [WARNING]  (254837) : Exiting Master process...
Dec 11 14:34:55 compute-0 neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2[254833]: [WARNING]  (254837) : Exiting Master process...
Dec 11 14:34:55 compute-0 neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2[254833]: [ALERT]    (254837) : Current worker (254839) exited with code 143 (Terminated)
Dec 11 14:34:55 compute-0 neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2[254833]: [WARNING]  (254837) : All workers exited. Exiting... (0)
Dec 11 14:34:55 compute-0 systemd[1]: libpod-1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963.scope: Deactivated successfully.
Dec 11 14:34:55 compute-0 podman[255382]: 2025-12-11 14:34:55.786411082 +0000 UTC m=+0.080011062 container died 1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 11 14:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963-userdata-shm.mount: Deactivated successfully.
Dec 11 14:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dba00a14914b6438b869e79095cb5f97dadeb4ade8384ec1a89da6bd609d7d3-merged.mount: Deactivated successfully.
Dec 11 14:34:55 compute-0 podman[255382]: 2025-12-11 14:34:55.85490235 +0000 UTC m=+0.148502330 container cleanup 1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 11 14:34:55 compute-0 systemd[1]: libpod-conmon-1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963.scope: Deactivated successfully.
Dec 11 14:34:55 compute-0 podman[255428]: 2025-12-11 14:34:55.964844842 +0000 UTC m=+0.073236601 container remove 1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:34:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:55.979 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[97e35eb8-ca2d-493c-b19d-43cebd8a1816]: (4, ('Thu Dec 11 02:34:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2 (1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963)\n1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963\nThu Dec 11 02:34:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2 (1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963)\n1bd3aacbfff73c55c88e6286df8769a08b3efb8c4a7b6a4d2022f1a75396c963\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:34:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:55.981 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[89ce46e2-5f1f-4405-ac61-211e29750137]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:34:55 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:55.982 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c64238-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:34:55 compute-0 kernel: tap81c64238-e0: left promiscuous mode
Dec 11 14:34:55 compute-0 nova_compute[189440]: 2025-12-11 14:34:55.986 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.015 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:56.019 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[07758564-4676-4183-a53d-e4d3d43882b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.020 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:56.035 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[c43219ec-aec6-4159-b9f7-2cf84d997119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:34:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:56.037 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[da40c8dc-ef8e-459b-bf49-9c54fe838eb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:34:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:56.057 239832 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb35364-cd63-4ea3-8c57-c26b81cba1e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568043, 'reachable_time': 31851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255445, 'error': None, 'target': 'ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:34:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:56.062 106799 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81c64238-e165-40c5-bca0-74045d48e1c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 11 14:34:56 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:56.062 106799 DEBUG oslo.privsep.daemon [-] privsep: reply[5221ecb7-574e-42c4-b425-37a28673e0fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 11 14:34:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d81c64238\x2de165\x2d40c5\x2dbca0\x2d74045d48e1c2.mount: Deactivated successfully.
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.328 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.629 189444 DEBUG nova.virt.libvirt.vif [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-11T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1014638578',display_name='tempest-TestServerBasicOps-server-1014638578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1014638578',id=9,image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA7iyLoIHTWacJvoiouumlz6dlFkR5262yGsw865DcSUuDmeWwYsJQgYdwidpGvc0DIt6lJev8qlAifxnLSRhnk+65agwiuleoK2QPljsrWTbNmd08IEYLMA3e0FsQd0sA==',key_name='tempest-TestServerBasicOps-463530918',keypairs=<?>,launch_index=0,launched_at=2025-12-11T14:33:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9918c3b83e4146fb8f595fd50ea637fe',ramdisk_id='',reservation_id='r-p4xeylws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='64e29581-a774-4784-b0cb-b4428b3222f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-179046709',owner_user_name='tempest-TestServerBasicOps-179046709-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-11T14:34:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='66ecbf8a280a44f5b04c4f801fa62c4b',uuid=62bfa43b-7258-445f-b9e2-f93556312882,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": 
"fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.630 189444 DEBUG nova.network.os_vif_util [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Converting VIF {"id": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "address": "fa:16:3e:8c:a7:42", "network": {"id": "81c64238-e165-40c5-bca0-74045d48e1c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-870097525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9918c3b83e4146fb8f595fd50ea637fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5867872c-9f", "ovs_interfaceid": "5867872c-9fad-4f6d-bbe9-964f15daf5ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.631 189444 DEBUG nova.network.os_vif_util [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:a7:42,bridge_name='br-int',has_traffic_filtering=True,id=5867872c-9fad-4f6d-bbe9-964f15daf5ad,network=Network(81c64238-e165-40c5-bca0-74045d48e1c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5867872c-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.632 189444 DEBUG os_vif [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:a7:42,bridge_name='br-int',has_traffic_filtering=True,id=5867872c-9fad-4f6d-bbe9-964f15daf5ad,network=Network(81c64238-e165-40c5-bca0-74045d48e1c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5867872c-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.633 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.634 189444 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5867872c-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.635 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.638 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.639 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.641 189444 INFO os_vif [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:a7:42,bridge_name='br-int',has_traffic_filtering=True,id=5867872c-9fad-4f6d-bbe9-964f15daf5ad,network=Network(81c64238-e165-40c5-bca0-74045d48e1c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5867872c-9f')
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.642 189444 INFO nova.virt.libvirt.driver [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Deleting instance files /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882_del
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.643 189444 INFO nova.virt.libvirt.driver [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Deletion of /var/lib/nova/instances/62bfa43b-7258-445f-b9e2-f93556312882_del complete
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.724 189444 INFO nova.compute.manager [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Took 1.23 seconds to destroy the instance on the hypervisor.
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.725 189444 DEBUG oslo.service.loopingcall [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.726 189444 DEBUG nova.compute.manager [-] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.726 189444 DEBUG nova.network.neutron [-] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.874 189444 DEBUG nova.compute.manager [req-e7995351-87eb-440e-a71c-371164d4942b req-951e8216-0d63-4ecd-9e74-fd5b524b8ea4 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received event network-vif-unplugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.874 189444 DEBUG oslo_concurrency.lockutils [req-e7995351-87eb-440e-a71c-371164d4942b req-951e8216-0d63-4ecd-9e74-fd5b524b8ea4 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "62bfa43b-7258-445f-b9e2-f93556312882-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.875 189444 DEBUG oslo_concurrency.lockutils [req-e7995351-87eb-440e-a71c-371164d4942b req-951e8216-0d63-4ecd-9e74-fd5b524b8ea4 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.876 189444 DEBUG oslo_concurrency.lockutils [req-e7995351-87eb-440e-a71c-371164d4942b req-951e8216-0d63-4ecd-9e74-fd5b524b8ea4 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.876 189444 DEBUG nova.compute.manager [req-e7995351-87eb-440e-a71c-371164d4942b req-951e8216-0d63-4ecd-9e74-fd5b524b8ea4 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] No waiting events found dispatching network-vif-unplugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 11 14:34:56 compute-0 nova_compute[189440]: 2025-12-11 14:34:56.877 189444 DEBUG nova.compute.manager [req-e7995351-87eb-440e-a71c-371164d4942b req-951e8216-0d63-4ecd-9e74-fd5b524b8ea4 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received event network-vif-unplugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 11 14:34:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:57.006 106686 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:14:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:87:69:a6:ee:c9'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 11 14:34:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:57.007 106686 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 11 14:34:57 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:34:57.008 106686 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91d1351c-e9c8-4a9c-80fe-965b575ecbf6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 11 14:34:57 compute-0 nova_compute[189440]: 2025-12-11 14:34:57.014 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:34:57 compute-0 nova_compute[189440]: 2025-12-11 14:34:57.771 189444 DEBUG nova.network.neutron [-] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 11 14:34:57 compute-0 nova_compute[189440]: 2025-12-11 14:34:57.795 189444 INFO nova.compute.manager [-] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Took 1.07 seconds to deallocate network for instance.
Dec 11 14:34:57 compute-0 nova_compute[189440]: 2025-12-11 14:34:57.852 189444 DEBUG oslo_concurrency.lockutils [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:34:57 compute-0 nova_compute[189440]: 2025-12-11 14:34:57.853 189444 DEBUG oslo_concurrency.lockutils [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:34:57 compute-0 nova_compute[189440]: 2025-12-11 14:34:57.888 189444 DEBUG nova.compute.manager [req-fa93ac47-262c-4fb6-8822-94429eff166b req-094ed5fd-190d-4ed5-8963-b4ac4cdaded3 a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received event network-vif-deleted-5867872c-9fad-4f6d-bbe9-964f15daf5ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.149 189444 DEBUG nova.compute.provider_tree [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.171 189444 DEBUG nova.scheduler.client.report [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.200 189444 DEBUG oslo_concurrency.lockutils [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.229 189444 INFO nova.scheduler.client.report [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Deleted allocations for instance 62bfa43b-7258-445f-b9e2-f93556312882
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.303 189444 DEBUG oslo_concurrency.lockutils [None req-1edb006f-5e48-4a93-9c72-32ba9a1c78f6 66ecbf8a280a44f5b04c4f801fa62c4b 9918c3b83e4146fb8f595fd50ea637fe - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:34:58 compute-0 podman[255447]: 2025-12-11 14:34:58.628719967 +0000 UTC m=+0.214122928 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.972 189444 DEBUG nova.compute.manager [req-ef61a5c7-c617-48f8-9e84-9e1ee77a1768 req-160ba260-7c98-48f8-b77c-b11652e53e2c a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received event network-vif-plugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.972 189444 DEBUG oslo_concurrency.lockutils [req-ef61a5c7-c617-48f8-9e84-9e1ee77a1768 req-160ba260-7c98-48f8-b77c-b11652e53e2c a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Acquiring lock "62bfa43b-7258-445f-b9e2-f93556312882-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.973 189444 DEBUG oslo_concurrency.lockutils [req-ef61a5c7-c617-48f8-9e84-9e1ee77a1768 req-160ba260-7c98-48f8-b77c-b11652e53e2c a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.973 189444 DEBUG oslo_concurrency.lockutils [req-ef61a5c7-c617-48f8-9e84-9e1ee77a1768 req-160ba260-7c98-48f8-b77c-b11652e53e2c a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] Lock "62bfa43b-7258-445f-b9e2-f93556312882-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.973 189444 DEBUG nova.compute.manager [req-ef61a5c7-c617-48f8-9e84-9e1ee77a1768 req-160ba260-7c98-48f8-b77c-b11652e53e2c a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] No waiting events found dispatching network-vif-plugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 11 14:34:58 compute-0 nova_compute[189440]: 2025-12-11 14:34:58.974 189444 WARNING nova.compute.manager [req-ef61a5c7-c617-48f8-9e84-9e1ee77a1768 req-160ba260-7c98-48f8-b77c-b11652e53e2c a9d0b3136ebc4cb09707e14ceaec6df9 097cf560532642cabedf2f26a3257dec - - default default] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Received unexpected event network-vif-plugged-5867872c-9fad-4f6d-bbe9-964f15daf5ad for instance with vm_state deleted and task_state None.
Dec 11 14:34:59 compute-0 podman[203650]: time="2025-12-11T14:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:34:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:34:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5266 "" "Go-http-client/1.1"
Dec 11 14:35:01 compute-0 nova_compute[189440]: 2025-12-11 14:35:01.333 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:01 compute-0 openstack_network_exporter[205834]: ERROR   14:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:35:01 compute-0 openstack_network_exporter[205834]: ERROR   14:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:35:01 compute-0 openstack_network_exporter[205834]: ERROR   14:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:35:01 compute-0 openstack_network_exporter[205834]: ERROR   14:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:35:01 compute-0 openstack_network_exporter[205834]: ERROR   14:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:35:01 compute-0 podman[255472]: 2025-12-11 14:35:01.529130041 +0000 UTC m=+0.105109912 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Dec 11 14:35:01 compute-0 podman[255473]: 2025-12-11 14:35:01.579260976 +0000 UTC m=+0.130242991 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 14:35:01 compute-0 nova_compute[189440]: 2025-12-11 14:35:01.636 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:35:04.116 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 11 14:35:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:35:04.117 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 11 14:35:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:35:04.118 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 11 14:35:06 compute-0 nova_compute[189440]: 2025-12-11 14:35:06.336 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:06 compute-0 ovn_controller[97832]: 2025-12-11T14:35:06Z|00130|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:35:06 compute-0 ovn_controller[97832]: 2025-12-11T14:35:06Z|00131|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:35:06 compute-0 nova_compute[189440]: 2025-12-11 14:35:06.420 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:06 compute-0 nova_compute[189440]: 2025-12-11 14:35:06.639 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:06 compute-0 ovn_controller[97832]: 2025-12-11T14:35:06Z|00132|binding|INFO|Releasing lport af28a710-cfbd-404b-b1d5-5903ce1a6b8c from this chassis (sb_readonly=0)
Dec 11 14:35:06 compute-0 ovn_controller[97832]: 2025-12-11T14:35:06Z|00133|binding|INFO|Releasing lport 33f7bdab-616d-48cf-a80b-a3a17467ce09 from this chassis (sb_readonly=0)
Dec 11 14:35:06 compute-0 nova_compute[189440]: 2025-12-11 14:35:06.691 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:10 compute-0 nova_compute[189440]: 2025-12-11 14:35:10.760 189444 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765463695.7590113, 62bfa43b-7258-445f-b9e2-f93556312882 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 11 14:35:10 compute-0 nova_compute[189440]: 2025-12-11 14:35:10.761 189444 INFO nova.compute.manager [-] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] VM Stopped (Lifecycle Event)
Dec 11 14:35:10 compute-0 nova_compute[189440]: 2025-12-11 14:35:10.782 189444 DEBUG nova.compute.manager [None req-769ea5ee-1bea-465b-8ceb-373bd1a41754 - - - - - -] [instance: 62bfa43b-7258-445f-b9e2-f93556312882] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 11 14:35:11 compute-0 nova_compute[189440]: 2025-12-11 14:35:11.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:35:11 compute-0 nova_compute[189440]: 2025-12-11 14:35:11.339 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:11 compute-0 nova_compute[189440]: 2025-12-11 14:35:11.644 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:14 compute-0 podman[255515]: 2025-12-11 14:35:14.829044147 +0000 UTC m=+0.132461286 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 11 14:35:14 compute-0 podman[255516]: 2025-12-11 14:35:14.834432188 +0000 UTC m=+0.116442190 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 11 14:35:15 compute-0 nova_compute[189440]: 2025-12-11 14:35:15.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:35:15 compute-0 nova_compute[189440]: 2025-12-11 14:35:15.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 11 14:35:16 compute-0 nova_compute[189440]: 2025-12-11 14:35:16.343 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:16 compute-0 nova_compute[189440]: 2025-12-11 14:35:16.648 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:18 compute-0 nova_compute[189440]: 2025-12-11 14:35:18.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:35:18 compute-0 nova_compute[189440]: 2025-12-11 14:35:18.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 11 14:35:21 compute-0 nova_compute[189440]: 2025-12-11 14:35:21.347 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 11 14:35:21 compute-0 podman[255557]: 2025-12-11 14:35:21.528995189 +0000 UTC m=+0.123513284 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 11 14:35:21 compute-0 podman[255560]: 2025-12-11 14:35:21.532725801 +0000 UTC m=+0.102717452 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, tcib_managed=true, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 11 14:35:21 compute-0 podman[255559]: 2025-12-11 14:35:21.546297315 +0000 UTC m=+0.123049723 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 11 14:35:21 compute-0 podman[255558]: 2025-12-11 14:35:21.5635575 +0000 UTC m=+0.137444967 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, config_id=edpm, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 11 14:35:21 compute-0 nova_compute[189440]: 2025-12-11 14:35:21.651 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:22 compute-0 nova_compute[189440]: 2025-12-11 14:35:22.231 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:35:22 compute-0 nova_compute[189440]: 2025-12-11 14:35:22.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:35:22 compute-0 nova_compute[189440]: 2025-12-11 14:35:22.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:35:22 compute-0 nova_compute[189440]: 2025-12-11 14:35:22.236 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 11 14:35:22 compute-0 nova_compute[189440]: 2025-12-11 14:35:22.582 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:35:22 compute-0 nova_compute[189440]: 2025-12-11 14:35:22.583 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:35:22 compute-0 nova_compute[189440]: 2025-12-11 14:35:22.583 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:35:22 compute-0 nova_compute[189440]: 2025-12-11 14:35:22.584 189444 DEBUG nova.objects.instance [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64b46b2-b462-4f18-99a0-33cce11b70c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.350 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.641 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updating instance_info_cache with network_info: [{"id": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "address": "fa:16:3e:f3:ef:3e", "network": {"id": "8a57e9b6-2caa-4fc2-90ee-ef2f688d63c0", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2142628490-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16cfe265641045f6adca23a64917736e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38f9dcea-bf", "ovs_interfaceid": "38f9dcea-bf59-4044-812a-7bf30f595c5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.653 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.677 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-f64b46b2-b462-4f18-99a0-33cce11b70c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.678 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: f64b46b2-b462-4f18-99a0-33cce11b70c3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.679 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.680 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.719 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.720 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.720 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.721 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.822 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.907 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:35:26 compute-0 nova_compute[189440]: 2025-12-11 14:35:26.909 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.009 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.019 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.096 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.098 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.161 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.573 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.575 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4881MB free_disk=72.26900100708008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.576 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.576 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.710 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.711 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.712 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.712 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.909 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.932 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.973 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:35:27 compute-0 nova_compute[189440]: 2025-12-11 14:35:27.974 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:35:29 compute-0 nova_compute[189440]: 2025-12-11 14:35:29.530 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:35:29 compute-0 podman[255639]: 2025-12-11 14:35:29.587525415 +0000 UTC m=+0.176700746 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 11 14:35:29 compute-0 podman[203650]: time="2025-12-11T14:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:35:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:35:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5270 "" "Go-http-client/1.1"
Dec 11 14:35:30 compute-0 nova_compute[189440]: 2025-12-11 14:35:30.230 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:35:31 compute-0 nova_compute[189440]: 2025-12-11 14:35:31.354 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:31 compute-0 openstack_network_exporter[205834]: ERROR   14:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:35:31 compute-0 openstack_network_exporter[205834]: ERROR   14:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:35:31 compute-0 openstack_network_exporter[205834]: ERROR   14:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:35:31 compute-0 openstack_network_exporter[205834]: ERROR   14:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:35:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:35:31 compute-0 openstack_network_exporter[205834]: ERROR   14:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:35:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:35:31 compute-0 nova_compute[189440]: 2025-12-11 14:35:31.657 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:32 compute-0 podman[255664]: 2025-12-11 14:35:32.503517651 +0000 UTC m=+0.098425952 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, 
managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Dec 11 14:35:32 compute-0 podman[255665]: 2025-12-11 14:35:32.511550211 +0000 UTC m=+0.089631303 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 11 14:35:36 compute-0 nova_compute[189440]: 2025-12-11 14:35:36.358 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:36 compute-0 nova_compute[189440]: 2025-12-11 14:35:36.660 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:40 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 11 14:35:41 compute-0 nova_compute[189440]: 2025-12-11 14:35:41.360 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:41 compute-0 nova_compute[189440]: 2025-12-11 14:35:41.663 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.994 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.994 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f3e9e113fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:42 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:42.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3e9dd02510>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.000 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b112e8a-c27d-4b2e-91fc-81552a0cd4ee', 'name': 'tempest-AttachInterfacesUnderV243Test-server-29252937', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b0f7c7a5f01c4c7a9fd2fa3668dcd463', 'user_id': 'a714564f83e74b39aa33b964e9913421', 'hostId': '5dbf343690864d1983c881e8bc082672162e288a5198d8460c1b4743', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.003 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64b46b2-b462-4f18-99a0-33cce11b70c3', 'name': 'tempest-ServerAddressesTestJSON-server-1930571022', 'flavor': {'id': '639c6f85-2c0f-4003-98b6-94c63eeb9fc7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '64e29581-a774-4784-b0cb-b4428b3222f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16cfe265641045f6adca23a64917736e', 'user_id': '719b5c4df50d474091f6f471803c8a13', 'hostId': '2fcddfdd3b298ab69316782a145f6113cf5f677ad9bc894793473b66', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.003 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.004 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-11T14:35:43.004155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.008 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.012 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3e9e111940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.013 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f296090>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.014 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-11T14:35:43.013968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.042 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/cpu volume: 39800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.068 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/cpu volume: 40900000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.069 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3ea0f907d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.069 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9f4078c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.070 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-11T14:35:43.070078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.085 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.085 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.100 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.101 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f3e9e1a80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.101 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.102 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.102 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f3e9e1138c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.103 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-11T14:35:43.102183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1138f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.103 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.103 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/memory.usage volume: 46.4921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-11T14:35:43.103672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.104 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/memory.usage volume: 41.73828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f3e9e113920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.105 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.105 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.105 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3e9e1a8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-11T14:35:43.104993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f3e9e1a81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.106 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.106 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.106 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-11T14:35:43.106350) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f3e9e1a8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.107 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.107 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.107 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.107 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-11T14:35:43.107520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.108 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f3e9e1a82f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.108 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.108 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.108 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-11T14:35:43.108617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.109 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f3ea207c830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.109 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1133e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.109 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.109 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.110 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.110 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.110 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f3e9e113410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-11T14:35:43.109738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.111 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-11T14:35:43.111317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.146 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.latency volume: 509451213 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.147 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.latency volume: 51551775 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.191 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 715818456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.192 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.latency volume: 141083317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f3e9e113470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.193 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1134a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.193 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.193 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.requests volume: 1104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-11T14:35:43.193472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.194 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.194 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 1133 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.194 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f3e9e1134d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.195 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.195 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-11T14:35:43.195513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.196 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.196 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.196 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f3e9e113530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.197 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.197 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.197 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.bytes volume: 73060352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-11T14:35:43.197591) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.198 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.198 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 73019392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.198 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f3e9e113590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.199 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1135c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.199 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.latency volume: 4383891649 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-11T14:35:43.199595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.200 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.200 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 10586132488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.200 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3e9e1a8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.201 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1a85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.201 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.202 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f3e9e1135f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.202 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.203 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.requests volume: 332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.203 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-11T14:35:43.201535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-11T14:35:43.203100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.203 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.204 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.204 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f3e9e113980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.205 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.205 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.205 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f3e9e113c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.206 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f3e9e113650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-11T14:35:43.205368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.207 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f3e9e113e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.208 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.208 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.208 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.208 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-11T14:35:43.207235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.209 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-11T14:35:43.208426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f3e9e1136b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.209 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e1136e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f3e9e113ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.210 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.211 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.211 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-11T14:35:43.210074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.211 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.212 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f3e9e113f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.212 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3e9e113f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-11T14:35:43.211397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.212 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.213 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.213 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.213 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f3e9e113320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3e9f2bfce0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.214 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.214 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.214 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3ea1743fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.214 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.214 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.bytes volume: 30521856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.214 14 DEBUG ceilometer.compute.pollsters [-] 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.215 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 31009280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.215 14 DEBUG ceilometer.compute.pollsters [-] f64b46b2-b462-4f18-99a0-33cce11b70c3/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.215 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-11T14:35:43.212930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-11T14:35:43.214398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:43 compute-0 ceilometer_agent_compute[200203]: 2025-12-11 14:35:43.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 11 14:35:45 compute-0 ovn_controller[97832]: 2025-12-11T14:35:45Z|00134|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Dec 11 14:35:45 compute-0 podman[255709]: 2025-12-11 14:35:45.510959326 +0000 UTC m=+0.095672487 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 11 14:35:45 compute-0 podman[255710]: 2025-12-11 14:35:45.515455522 +0000 UTC m=+0.101510855 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:35:46 compute-0 nova_compute[189440]: 2025-12-11 14:35:46.363 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:46 compute-0 nova_compute[189440]: 2025-12-11 14:35:46.666 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:51 compute-0 nova_compute[189440]: 2025-12-11 14:35:51.367 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:51 compute-0 nova_compute[189440]: 2025-12-11 14:35:51.670 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:52 compute-0 podman[255753]: 2025-12-11 14:35:52.481534267 +0000 UTC m=+0.082473693 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 11 14:35:52 compute-0 podman[255761]: 2025-12-11 14:35:52.493219914 +0000 UTC m=+0.074612727 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4)
Dec 11 14:35:52 compute-0 podman[255755]: 2025-12-11 14:35:52.524093936 +0000 UTC m=+0.105717776 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 11 14:35:52 compute-0 podman[255754]: 2025-12-11 14:35:52.533107119 +0000 UTC m=+0.118937398 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, vcs-type=git, version=9.4, name=ubi9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=)
Dec 11 14:35:56 compute-0 nova_compute[189440]: 2025-12-11 14:35:56.370 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:56 compute-0 nova_compute[189440]: 2025-12-11 14:35:56.673 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:35:59 compute-0 podman[203650]: time="2025-12-11T14:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:35:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:35:59 compute-0 podman[203650]: @ - - [11/Dec/2025:14:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5273 "" "Go-http-client/1.1"
Dec 11 14:36:00 compute-0 podman[255828]: 2025-12-11 14:36:00.518026349 +0000 UTC m=+0.110590531 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 11 14:36:01 compute-0 nova_compute[189440]: 2025-12-11 14:36:01.371 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:01 compute-0 openstack_network_exporter[205834]: ERROR   14:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:36:01 compute-0 openstack_network_exporter[205834]: ERROR   14:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:36:01 compute-0 openstack_network_exporter[205834]: ERROR   14:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:36:01 compute-0 openstack_network_exporter[205834]: ERROR   14:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:36:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:36:01 compute-0 openstack_network_exporter[205834]: ERROR   14:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:36:01 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:36:01 compute-0 nova_compute[189440]: 2025-12-11 14:36:01.677 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:03 compute-0 podman[255852]: 2025-12-11 14:36:03.513852634 +0000 UTC m=+0.102848927 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=)
Dec 11 14:36:03 compute-0 podman[255853]: 2025-12-11 14:36:03.536766087 +0000 UTC m=+0.130344149 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 14:36:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:36:04.118 106686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:36:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:36:04.118 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:36:04 compute-0 ovn_metadata_agent[106681]: 2025-12-11 14:36:04.119 106686 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:36:06 compute-0 nova_compute[189440]: 2025-12-11 14:36:06.374 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:06 compute-0 nova_compute[189440]: 2025-12-11 14:36:06.681 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:11 compute-0 nova_compute[189440]: 2025-12-11 14:36:11.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:36:11 compute-0 nova_compute[189440]: 2025-12-11 14:36:11.378 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:11 compute-0 nova_compute[189440]: 2025-12-11 14:36:11.683 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:15 compute-0 nova_compute[189440]: 2025-12-11 14:36:15.234 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:36:15 compute-0 nova_compute[189440]: 2025-12-11 14:36:15.235 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 11 14:36:16 compute-0 nova_compute[189440]: 2025-12-11 14:36:16.381 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:16 compute-0 podman[255895]: 2025-12-11 14:36:16.53466075 +0000 UTC m=+0.116021149 container health_status 4cd8cae733cd89e5f199ba07a71552573f35e4951a5279485148128ca17dccfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 11 14:36:16 compute-0 podman[255896]: 2025-12-11 14:36:16.560922771 +0000 UTC m=+0.142294490 container health_status 6b374344dbc2342c998d8cb863fe51bf91b0490e777a1304ad7230d5bfc2895f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 11 14:36:16 compute-0 nova_compute[189440]: 2025-12-11 14:36:16.686 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:17 compute-0 systemd-logind[786]: New session 31 of user zuul.
Dec 11 14:36:17 compute-0 systemd[1]: Started Session 31 of User zuul.
Dec 11 14:36:18 compute-0 nova_compute[189440]: 2025-12-11 14:36:18.236 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:36:19 compute-0 nova_compute[189440]: 2025-12-11 14:36:19.235 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:36:21 compute-0 nova_compute[189440]: 2025-12-11 14:36:21.385 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:21 compute-0 nova_compute[189440]: 2025-12-11 14:36:21.689 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:22 compute-0 nova_compute[189440]: 2025-12-11 14:36:22.231 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:36:22 compute-0 nova_compute[189440]: 2025-12-11 14:36:22.233 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:36:22 compute-0 nova_compute[189440]: 2025-12-11 14:36:22.234 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 11 14:36:22 compute-0 nova_compute[189440]: 2025-12-11 14:36:22.590 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 11 14:36:22 compute-0 nova_compute[189440]: 2025-12-11 14:36:22.590 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquired lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 11 14:36:22 compute-0 nova_compute[189440]: 2025-12-11 14:36:22.591 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 11 14:36:23 compute-0 podman[256118]: 2025-12-11 14:36:23.534477458 +0000 UTC m=+0.115809244 container health_status 72431e7b01077f168b1d1aa3c7348ae6e4af47d9a489fedb2d36beec3738406a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, io.buildah.version=1.29.0, container_name=kepler, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 11 14:36:23 compute-0 podman[256116]: 2025-12-11 14:36:23.537887439 +0000 UTC m=+0.117327499 container health_status 11c5d710f56033599275da6a58c594bbab81c410e23905b51ec469dce04c59ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 11 14:36:23 compute-0 podman[256122]: 2025-12-11 14:36:23.550034757 +0000 UTC m=+0.114167045 container health_status ea9e0dc347a8be5dda69413aa4190c84070cbf3aec4b689fc90fb3eb4597efc3 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251210, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 11 14:36:23 compute-0 podman[256120]: 2025-12-11 14:36:23.559192194 +0000 UTC m=+0.115564899 container health_status a7cc265c7fa1e5e70c97644808d537127a8270d1d128962008d7fa451fb4e3bc (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Dec 11 14:36:26 compute-0 ovs-vsctl[256236]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.388 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.692 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.849 189444 DEBUG nova.network.neutron [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updating instance_info_cache with network_info: [{"id": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "address": "fa:16:3e:d2:1f:b8", "network": {"id": "3a7879e9-5e69-43df-aeae-21ce102a3b8a", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-980185420-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0f7c7a5f01c4c7a9fd2fa3668dcd463", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6427f2b4-25", "ovs_interfaceid": "6427f2b4-25ae-460a-8ade-54b5aba9dff6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.898 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Releasing lock "refresh_cache-1b112e8a-c27d-4b2e-91fc-81552a0cd4ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.899 189444 DEBUG nova.compute.manager [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] [instance: 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.900 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.901 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.935 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.935 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.936 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:36:26 compute-0 nova_compute[189440]: 2025-12-11 14:36:26.937 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 11 14:36:26 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 255964 (sos)
Dec 11 14:36:27 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 11 14:36:27 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 11 14:36:27 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.085 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:36:27 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.163 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:36:27 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.165 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:36:27 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.254 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b112e8a-c27d-4b2e-91fc-81552a0cd4ee/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:36:27 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.266 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:36:27 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.341 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:36:27 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.344 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 11 14:36:27 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.435 189444 DEBUG oslo_concurrency.processutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64b46b2-b462-4f18-99a0-33cce11b70c3/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 11 14:36:27 compute-0 virtqemud[189338]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 11 14:36:27 compute-0 virtqemud[189338]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 11 14:36:27 compute-0 virtqemud[189338]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 11 14:36:27 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.997 189444 WARNING nova.virt.libvirt.driver [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:27.999 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4781MB free_disk=72.26856231689453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.000 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.000 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.115 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance f64b46b2-b462-4f18-99a0-33cce11b70c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.116 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Instance 1b112e8a-c27d-4b2e-91fc-81552a0cd4ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.116 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.116 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.209 189444 DEBUG nova.compute.provider_tree [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed in ProviderTree for provider: 1bda6308-729f-4919-a8ba-89570b8721fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.230 189444 DEBUG nova.scheduler.client.report [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Inventory has not changed for provider 1bda6308-729f-4919-a8ba-89570b8721fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.232 189444 DEBUG nova.compute.resource_tracker [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 11 14:36:28 compute-0 nova_compute[189440]: 2025-12-11 14:36:28.232 189444 DEBUG oslo_concurrency.lockutils [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 11 14:36:29 compute-0 nova_compute[189440]: 2025-12-11 14:36:29.566 189444 DEBUG oslo_service.periodic_task [None req-58f3bc5f-9208-4a1e-bcee-fc7478238c62 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 11 14:36:29 compute-0 podman[203650]: time="2025-12-11T14:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 11 14:36:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Dec 11 14:36:29 compute-0 podman[203650]: @ - - [11/Dec/2025:14:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5270 "" "Go-http-client/1.1"
Dec 11 14:36:31 compute-0 nova_compute[189440]: 2025-12-11 14:36:31.389 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:31 compute-0 openstack_network_exporter[205834]: ERROR   14:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 11 14:36:31 compute-0 openstack_network_exporter[205834]: ERROR   14:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:36:31 compute-0 openstack_network_exporter[205834]: ERROR   14:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 11 14:36:31 compute-0 openstack_network_exporter[205834]: ERROR   14:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 11 14:36:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:36:31 compute-0 openstack_network_exporter[205834]: ERROR   14:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 11 14:36:31 compute-0 openstack_network_exporter[205834]: 
Dec 11 14:36:31 compute-0 podman[256766]: 2025-12-11 14:36:31.541595188 +0000 UTC m=+0.139484225 container health_status 8f7805268079821f08cb8d6a6f0dc1c8e027a0f4b2d6dd5c5155495b0964ef9e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 11 14:36:31 compute-0 nova_compute[189440]: 2025-12-11 14:36:31.697 189444 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 11 14:36:32 compute-0 systemd[1]: Starting Hostname Service...
Dec 11 14:36:32 compute-0 systemd[1]: Started Hostname Service.
Dec 11 14:36:34 compute-0 podman[256968]: 2025-12-11 14:36:34.518594868 +0000 UTC m=+0.105247723 container health_status 8fbfd84ce1474c2e1547df207f38f71f10d2a2c4aea29d2e0a737b13fce0f5be (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 11 14:36:34 compute-0 podman[256966]: 2025-12-11 14:36:34.537006144 +0000 UTC m=+0.129398715 container health_status 39beded1a38ced4fb9cb5ee4654a248f2e0e57a0b64a3fa8d0912094bbf0ec73 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
